Science.gov

Sample records for 26-degree forward-viewing cameras

  1. 126. AERIAL FORWARD VIEW OF ENCLOSED HURRICANE BOW WITH FLIGHT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    126. AERIAL FORWARD VIEW OF ENCLOSED HURRICANE BOW WITH FLIGHT DECK GUN MOUNTS REMOVED AND ANGLED FLIGHT DECK. 1 OCTOBER 1956. (NATIONAL ARCHIVES NO. 80-G-1001445) - U.S.S. HORNET, Puget Sound Naval Shipyard, Sinclair Inlet, Bremerton, Kitsap County, WA

  2. Spectral imaging using forward-viewing spectrally encoded endoscopy.

    PubMed

    Zeidan, Adel; Yelin, Dvir

    2016-02-01

    Spectrally encoded endoscopy (SEE) enables miniature, small-diameter endoscopic probes for minimally invasive imaging; however, using the broadband spectrum to encode space makes color and spectral imaging nontrivial and challenging. By careful registration and analysis of image data acquired by a prototype of a forward-viewing dual channel spectrally encoded rigid probe, we demonstrate spectral and color imaging within a narrow cylindrical lumen. Spectral imaging of calibration cylindrical test targets and an ex-vivo blood vessel demonstrates high-resolution spatial-spectral imaging with short (10 μs/line) exposure times. PMID:26977348

  3. Illness prevention in the NHS five year forward view.

    PubMed

    Fuller, Sabrina

    2015-06-01

    Illness prevention is a priority for the NHS Mandate and the Five Year Forward View, and offers a means to maintain sustainable health and social care services in the context of an ageing population and the growth of behaviour-related illness. The National Institute for Health and Care Excellence guidance recommends a structured approach to embedding behaviour change interventions into clinical care, and effective implementation requires organisational support. This article describes how nurse leaders, managers and commissioners can ensure this implementation through setting objectives for staff, training and development, as well as supporting staff to adopt healthier lifestyles. PMID:26014792

  4. Spectral imaging using forward-viewing spectrally encoded endoscopy

    PubMed Central

    Zeidan, Adel; Yelin, Dvir

    2016-01-01

    Spectrally encoded endoscopy (SEE) enables miniature, small-diameter endoscopic probes for minimally invasive imaging; however, using the broadband spectrum to encode space makes color and spectral imaging nontrivial and challenging. By careful registration and analysis of image data acquired by a prototype of a forward-viewing dual channel spectrally encoded rigid probe, we demonstrate spectral and color imaging within a narrow cylindrical lumen. Spectral imaging of calibration cylindrical test targets and an ex-vivo blood vessel demonstrates high-resolution spatial-spectral imaging with short (10 μs/line) exposure times. PMID:26977348
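
    At the heart of the approach, a dispersive element maps each wavelength of the broadband source to a distinct transverse position, so one spatial axis is read out spectrally. A minimal sketch of that wavelength-to-position mapping via the plane grating equation follows (the grating pitch, incidence angle, and lens focal length are illustrative assumptions, not parameters from the paper):

        import numpy as np

        # Plane grating equation: m * lambda = d * (sin(theta_i) + sin(theta_d)),
        # so each wavelength leaves the grating at a different angle and lands at a
        # different transverse position -- the spectrum encodes one spatial axis.
        m = 1                        # diffraction order (assumed)
        d = 1.0 / 1200e3             # pitch of a hypothetical 1200 lines/mm grating, m
        theta_i = np.deg2rad(30.0)   # assumed incidence angle

        wavelengths = np.linspace(450e-9, 700e-9, 6)                 # broadband band
        theta_d = np.arcsin(m * wavelengths / d - np.sin(theta_i))   # angle per wavelength

        f = 10e-3                                      # hypothetical focusing lens, 10 mm
        x = f * np.tan(theta_d - theta_d.mean())       # approximate position on the sample

        for lam, xi in zip(wavelengths, x):
            print(f"{lam*1e9:6.1f} nm -> {xi*1e6:8.1f} um from field centre")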

  5. Comparison of Retroflexed and Forward Views for Colorectal Endoscopic Submucosal Dissection

    PubMed Central

    Fujihara, Shintaro; Kobara, Hideki; Mori, Hirohito; Goda, Yasuhiro; Chiyo, Taiga; Matsunaga, Tae; Nishiyama, Noriko; Ayaki, Maki; Yachida, Tatsuo; Masaki, Tsutomu

    2015-01-01

    Background: The use of a retroflexed view exposes the entire tumor surface, which is obscured in the forward view, and contributes to complete tumor resection when combined with forward views. However, the efficacy and safety of using the retroflexed view for colorectal endoscopic submucosal dissection (ESD) are poorly understood. Methods: In this study, we assessed the efficacy and safety of the retroflexed view in colorectal ESD. From April 2009 to December 2013, 130 colorectal tumors were examined in 128 patients treated with ESD. A total of 119 patients with a mean tumor size of 27.2 mm were enrolled in the study, and these patients were assigned to undergo colorectal ESD with or without a retroflexed view. Results: The use of retroflexion was successful in 84.2% of patients. There were no perforations in the study and no complications related to the use of retroflexed views. The mean procedure time was 103.6±55.8 min in the retroflexed group, as compared with 108.0±66.5 min in the forward view group. The mean procedure time for resecting tumors >40 mm was significantly shorter in the retroflexed group relative to the forward group. Additionally, the mean dissection speed per unit area was significantly faster in the retroflexed group, as compared with the forward group. Conclusions: Retroflexed views can be used to remove lesions >40 mm and shorten procedure times. Retroflexion may also contribute to an improved en bloc resection rate. PMID:26078705

  6. Miniaturized rapid scanning, forward-viewing catheterscope for optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Guay Lord, Robin; Boudoux, Caroline; Seibel, Eric J.; Ellerbee, Audrey K.

    2016-02-01

    Patients afflicted with bladder cancer undergo annual surveillance in the clinic with flexible white light cystoscopy (WLC). However, WLC lacks the sensitivity to detect all bladder tumors and provides no stage information. Optical coherence tomography (OCT) can overcome these limitations of WLC due to its ability to visualize subsurface details of the bladder wall, to stage cancers and to detect tumors otherwise invisible to WLC. A major challenge, however, to realizing OCT imaging during clinical cystoscopies is developing a forward-viewing OCT catheterscope capable of passing through the 2.4-mm working channel of a standard flexible cystoscope. Additionally, to aid in identifying new tumors, the OCT system must be fast enough to collect data over the surface of the bladder without significantly increasing the procedure time. We have developed the first rapid-scanning forward-viewing OCT catheterscope that uses scanning fiber technology and is suitable for integration into flexible cystoscopes. The scanning fiber scope has a resonance frequency exceeding 2 kHz, which enables rapid volumetric data collection at a rate of 12.5 Hz. We expand on our previous design of such a scope by miniaturizing the scope package to a diameter of 1.29 mm and a rigid length of 19 mm, making this the smallest such package for forward-viewing, scanning OCT scopes. We validate the imaging quality of our prototype scope using phantom and ex vivo pig bladder samples. The miniaturized, rapid-scanning OCT scope is a promising tool to enable early detection and staging of bladder cancer during flexible WLC.

  7. Forward-viewing radial-array echoendoscope for staging of colon cancer beyond the rectum

    PubMed Central

    Kongkam, Pradermchai; Linlawan, Sittikorn; Aniwan, Satimai; Lakananurak, Narisorn; Khemnark, Suparat; Sahakitrungruang, Chucheep; Pattanaarun, Jirawat; Khomvilai, Supakij; Wisedopas, Naruemon; Ridtitid, Wiriyaporn; Bhutani, Manoop S; Kullavanijaya, Pinit; Rerknimitr, Rungsun

    2014-01-01

    AIM: To evaluate the feasibility of the novel forward-viewing radial-array echoendoscope for staging of colon cancer beyond the rectum in this first series. METHODS: A retrospective study with a prospectively entered database. From March 2012 to February 2013, a total of 21 patients (11 men) (mean age 64.2 years) with colon cancer beyond the rectum were recruited. The novel forward-viewing radial-array echoendoscope was used for ultrasonographic staging of colon cancer beyond the rectum. Ultrasonographic T and N staging were recorded when surgical pathology was used as a gold standard. RESULTS: The mean time to reach the lesion and the mean time to complete the procedure were 3.5 and 7.1 min, respectively. The echoendoscope passed through the lesions in 13 patients (61.9%) and reached the cecum in 10 of 13 patients (76.9%). No adverse events were found. The lesions were located in the cecum (n = 2), ascending colon (n = 1), transverse colon (n = 2), descending colon (n = 2), and sigmoid colon (n = 14). The accuracy rates for T1 (n = 3), T2 (n = 4), T3 (n = 13) and T4 (n = 1) were 100%, 60.0%, 84.6% and 100%, respectively. The overall accuracy rates for the T and N staging of colon cancer were 81.0% and 52.4%, respectively. The accuracy rates among traversable lesions (n = 13) and obstructive lesions (n = 8) were 61.5% and 100%, respectively. Endoscopic ultrasound and computed tomography had overall accuracy rates of 81.0% and 68.4%, respectively. CONCLUSION: The echoendoscope is a feasible staging tool for colon cancer beyond the rectum. However, the accuracy of the echoendoscope needs to be verified by larger systematic studies. PMID:24627604

  8. Conversion of Blastomyces dermatitidis to the yeast form at 37 degrees C and 26 degrees C.

    PubMed Central

    Kane, J

    1984-01-01

    A partially defined agar medium, KT, has been developed and compared with brain heart infusion agar for the conversion of Blastomyces dermatitidis to the yeast form. On the KT medium, the mold form converted to a yeast form within 72 h of incubation at 37 degrees C or after 3 weeks at 26 degrees C. A nutritionally dependent dimorphism in B. dermatitidis was observed. PMID:6490843

  9. Neurosurgical hand-held optical coherence tomography (OCT) forward-viewing probe

    NASA Astrophysics Data System (ADS)

    Sun, Cuiru; Lee, Kenneth K. C.; Vuong, Barry; Cusimano, Michael; Brukson, Alexander; Mariampillai, Adrian; Standish, Beau A.; Yang, Victor X. D.

    2012-02-01

    A prototype neurosurgical hand-held optical coherence tomography (OCT) imaging probe has been developed to provide micron resolution cross-sectional images of subsurface tissue during open surgery. This new ergonomic hand-held probe has been designed based on our group's previous work on electrostatically driven optical fibers. It has been packaged into a catheter probe in the familiar form factor of the clinically accepted Bayonet-shaped neurosurgical non-imaging Doppler ultrasound probes. The optical design was optimized using ZEMAX simulation. Optical properties of the probe were tested to yield an ~20 μm spot size, 5 mm working distance and a 3.5 mm field of view. The scan frequency can be increased or decreased by changing the applied voltage. Typically a scan frequency of less than 60 Hz is chosen to keep the applied voltage to less than 2000 V. The axial resolution of the probe was ~15 μm (in air) as determined by the OCT system. A custom-triggering methodology has been developed to provide continuous stable imaging, which is crucial for clinical utility. Feasibility of this probe, in combination with a 1310 nm swept source OCT system, was tested and images are presented to highlight the usefulness of such a forward viewing handheld OCT imaging probe. Knowledge gained from this research will lay the foundation for developing new OCT technologies for endovascular management of cerebral aneurysms and transsphenoidal neuroendoscopic treatment of pituitary tumors.

  10. Forward-viewing resonant fiber-optic scanning endoscope of appropriate scanning speed for 3D OCT imaging

    PubMed Central

    Huo, Li; Xi, Jiefeng; Wu, Yicong; Li, Xingde

    2010-01-01

    A forward-viewing resonant fiber-optic endoscope with a scanning speed appropriate for a high-speed Fourier-domain optical coherence tomography (FD-OCT) system was developed to enable real-time, three-dimensional endoscopic OCT imaging. A new method was explored to conveniently tune the scanning frequency of a resonant fiber-optic scanner, by properly selecting the fiber-optic cantilever length, partially changing the mechanical property of the cantilever, and adding a weight to the cantilever tip. Systematic analyses indicated that the resonant scanning frequency can be tuned over two orders of magnitude, spanning from ~10 Hz to the kHz range. Such a flexible scanning frequency range makes it possible to set an appropriate scanning speed of the endoscope to match the different A-scan rates of a variety of FD-OCT systems. A 2.4-mm diameter, 62.5-Hz scanning endoscope appropriate to work with a 40-kHz swept-source OCT (SS-OCT) system was developed and demonstrated for 3D OCT imaging of biological tissues. PMID:20639922
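
    The reported ~1/L² dependence of the cantilever resonance on length can be illustrated with textbook Euler-Bernoulli beam theory (a rough estimate assuming a bare 125 μm fused-silica fiber and a simple clamped-free first mode; the material constants are generic values, not figures from the paper):

        import numpy as np

        # First-mode resonance of a clamped-free cylindrical cantilever:
        #   f1 = (beta1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)),  beta1 ~ 1.875
        E = 73e9        # Young's modulus of fused silica, Pa (approximate)
        rho = 2200.0    # density of fused silica, kg/m^3 (approximate)
        r = 62.5e-6     # fiber radius for 125-um cladding, m
        I = np.pi * r**4 / 4.0     # area moment of inertia
        A = np.pi * r**2           # cross-sectional area
        beta1 = 1.875

        def f_res(L):
            """First resonant frequency (Hz) of a fiber cantilever of length L (m)."""
            return (beta1**2 / (2.0 * np.pi * L**2)) * np.sqrt(E * I / (rho * A))

        # Shortening the cantilever from 40 mm to 5 mm sweeps the resonance from
        # tens of Hz to several kHz, consistent with the tuning range described.
        for L_mm in (5, 10, 20, 40):
            print(f"L = {L_mm:2d} mm -> f1 ~ {f_res(L_mm * 1e-3):7.1f} Hz")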

  11. Ex vivo optical coherence tomography imaging of larynx tissues using a forward-viewing resonant fiber-optic scanning endoscope

    NASA Astrophysics Data System (ADS)

    Cernat, R.; Zhang, Y. Y.; Bradu, A.; Tatla, T.; Tadrous, P. J.; Li, X. D.; Podoleanu, A. Gh.

    2012-01-01

    A miniature endoscope probe for forward viewing in a 50 kHz swept source optical coherence tomography (SS-OCT) configuration was developed. The work presented here is an intermediate step in our research towards in vivo endoscopic laryngeal cancer screening. The endoscope probe consists of a miniature tubular lead zirconate titanate (PZT) actuator, a single mode fiber (SMF) cantilever and a GRIN lens, with a diameter of 2.4 mm. The outer surface of the PZT actuator is divided into four quadrants that form two pairs of orthogonal electrodes (X and Y). When sinusoidal waves of opposite polarities are applied to one electrode pair, the PZT tube bends transversally with respect to the two corresponding quadrants, and the fiber optic cantilever is displaced perpendicular to the PZT tube. The cantilever's resonant frequency was found experimentally to be 47.03 Hz. With the GRIN lens used, a lateral resolution of ~13 μm is expected. A 2D en face spiral scanning pattern is achieved by adjusting the phase between the X and Y electrode drive signals to approximately 90 degrees. Furthermore, we demonstrate the imaging capability of the probe by obtaining B-scan images of diseased larynx tissue and compare them with those obtained with a classical non-endoscopic 1310 nm SS-OCT system.
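
    The en face spiral scan arises from driving the two electrode pairs at the cantilever resonance, in quadrature, with a slowly ramped amplitude. A minimal sketch of the resulting trajectory follows (the frame period and sample count are illustrative assumptions, not the probe's actual drive parameters):

        import numpy as np

        f_res = 47.03          # cantilever resonant frequency from the abstract, Hz
        frame_period = 0.5     # assumed time to grow the spiral to full field, s
        t = np.linspace(0.0, frame_period, 20000)

        amplitude = t / frame_period                               # slow ramp: 0 -> full deflection
        x = amplitude * np.sin(2 * np.pi * f_res * t)              # X electrode pair drive
        y = amplitude * np.sin(2 * np.pi * f_res * t + np.pi / 2)  # Y pair, ~90 degrees out of phase

        # (x, y) traces an outward spiral covering the field of view; plotting it
        # (e.g. matplotlib's plt.plot(x, y)) shows the 2D en face scan pattern.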

  12. In vivo wide-field reflectance/fluorescence imaging and polarization-sensitive optical coherence tomography of human oral cavity with a forward-viewing probe.

    PubMed

    Yoon, Yeoreum; Jang, Won Hyuk; Xiao, Peng; Kim, Bumju; Wang, Taejun; Li, Qingyun; Lee, Ji Youl; Chung, Euiheon; Kim, Ki Hean

    2015-02-01

    We report multimodal imaging of human oral cavity in vivo based on simultaneous wide-field reflectance/fluorescence imaging and polarization-sensitive optical coherence tomography (PS-OCT) with a forward-viewing imaging probe. Wide-field reflectance/fluorescence imaging provided morphological and fluorescence information at the surface, while PS-OCT provided structural and birefringence information below the surface. The forward-viewing probe was designed to access the oral cavity through the mouth with dimensions of approximately 10 mm in diameter and 180 mm in length. The probe had a field of view (FOV) of approximately 5.5 mm in diameter and an adjustable depth of field (DOF) from 2 mm to 10 mm, controlled by varying the numerical aperture (NA) in the detection path. The adjustable DOF accommodated both image-based guiding, which requires a high DOF, and high-resolution, high-sensitivity imaging, which requires a low DOF. This multimodal imaging system was characterized by using a tissue phantom and a mouse model in vivo, and was applied to human oral cavity. Information on surface morphology and vasculature, and on the subsurface layered structure and birefringence of the oral cavity tissues, was obtained. These results showed the feasibility of this multimodal imaging system as a tool for studying oral cavity lesions in clinical applications. PMID:25780742

  13. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…

  14. Camera Mount for a Head-Up Display

    NASA Technical Reports Server (NTRS)

    Geoge, Wayne; Barnes, Monica; Johnson, Larry; Shelton, Kevin

    2007-01-01

    A mounting mechanism was designed and built to satisfy requirements specific to a developmental head-up display (HUD) to be used by pilots in a Boeing 757 airplane. This development was necessitated by the fact that although such mounting mechanisms were commercially available for other airplanes, there were none for the 757. The mounting mechanism supports a miniature electronic camera that provides a forward view. The mechanism was designed to be integrated with the other HUD instrumentation and to position the camera so that what is presented to the pilot is the image acquired by the camera, overlaid with alphanumeric and/or graphical symbols, from a close approximation of the pilot's natural forward perspective. The mounting mechanism includes an L-shaped mounting arm that can be adjusted easily to the pilot's perspective, without prior experience. The mounting mechanism is lightweight and flexible and presents little hazard to the pilot.

  15. Space Camera

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Nikon's F3 35mm camera was specially modified for use by Space Shuttle astronauts. The modification work produced a spinoff lubricant. Because lubricants in space have a tendency to migrate within the camera, Nikon conducted extensive development to produce nonmigratory lubricants; variations of these lubricants are used in the commercial F3, giving it better performance than conventional lubricants. Another spinoff is the coreless motor which allows the F3 to shoot 140 rolls of film on one set of batteries.

  16. Infrared Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays of infrared photodetectors known as quantum well infrared photodetectors (QWIPs). QWIPs were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  17. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  18. CCD Camera

    DOEpatents

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  19. Nikon Camera

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The Nikon FM compact has a simplification feature derived from cameras designed for easy yet accurate use in a weightless environment. The innovation is a plastic-cushioned advance lever which advances the film and simultaneously switches on a built-in light meter. With a turn of the lens aperture ring, a glowing signal in the viewfinder confirms correct exposure.

  20. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  1. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only the raw images are available. However, the equation provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature, this is not a problem; however, the technique needs adjustment for use with room-temperature cameras. This article describes the adjustment made to the equation and a test of this method.
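
    The underlying mean-variance (photon transfer) gain estimate, with a dark-frame correction of the kind the article motivates, can be sketched as follows (a minimal sketch; the two-flat/two-dark differencing scheme and variable names are illustrative assumptions, not the article's exact equation):

        import numpy as np

        def camera_gain(flat1, flat2, dark1, dark2):
            """Estimate gain in electrons per digital number (e-/DN).

            flat1, flat2: two identically exposed, uniformly illuminated frames.
            dark1, dark2: two dark frames at the same exposure time.
            Differencing frame pairs removes fixed-pattern structure; subtracting
            the dark statistics removes the dark-current contribution to both the
            mean signal and the variance.
            """
            flat1, flat2, dark1, dark2 = (np.asarray(a, dtype=float)
                                          for a in (flat1, flat2, dark1, dark2))
            signal = (flat1.mean() + flat2.mean()) - (dark1.mean() + dark2.mean())
            shot_var = np.var(flat1 - flat2) - np.var(dark1 - dark2)
            return signal / shot_var

        # Quick self-check with synthetic Poisson frames at a true gain of 2.0 e-/DN:
        rng = np.random.default_rng(0)
        gain_true, dark_e, signal_e = 2.0, 200.0, 20000.0
        frame = lambda mean_e: rng.poisson(mean_e, (512, 512)) / gain_true
        print(camera_gain(frame(signal_e + dark_e), frame(signal_e + dark_e),
                          frame(dark_e), frame(dark_e)))   # prints approximately 2.0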

  2. Vacuum Camera Cooler

    NASA Technical Reports Server (NTRS)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  3. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  4. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  5. Nanosecond frame cameras

    SciTech Connect

    Frank, A M; Wilkins, P R

    2001-01-05

    The advent of CCD cameras and computerized data recording has spurred the development of several new cameras and techniques for recording nanosecond images. We have made a side-by-side comparison of three nanosecond frame cameras, examining them for both performance and operational characteristics. The cameras include Micro-Channel Plate/CCD, Image Diode/CCD, and Image Diode/Film combinations of gating and data recording. The advantages and disadvantages of each device will be discussed.

  6. Harpicon camera for HDTV

    NASA Astrophysics Data System (ADS)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the knowhow built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to those of the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.

  7. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
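
    The classic trade-offs such an exercise explores can be put into numbers with the usual rule-of-thumb formulas (textbook approximations and assumed values, not figures from the article):

        import math

        focal_length = 0.05       # pinhole-to-sensor distance, m (assumed 50 mm)
        wavelength = 550e-9       # green light, m

        # Common rule of thumb for the sharpest pinhole: d ~ sqrt(2.44 * lambda * f)
        d_opt = math.sqrt(2.44 * wavelength * focal_length)
        f_number = focal_length / d_opt

        # Exposure scales with the square of the f-number relative to a normal lens.
        exposure_factor = (f_number / 16.0) ** 2   # relative to an f/16 lens exposure

        print(f"optimal pinhole diameter ~ {d_opt*1e3:.2f} mm")
        print(f"working aperture         ~ f/{f_number:.0f}")
        print(f"exposure vs f/16         ~ {exposure_factor:.0f}x longer")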

  8. 2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  9. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  10. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  11. Tower Camera Handbook

    SciTech Connect

    Moudry, D

    2005-01-01

    The tower camera in Barrow provides hourly images of the ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for comparison with the albedo that can be calculated from downward-looking radiometers, and to give some indication of present weather. Similarly, during springtime, the camera images show the changes in the ground albedo as the snow melts. The tower images are saved at hourly intervals. In addition, two other cameras, the skydeck camera in Barrow and the piling camera in Atqasuk, show the current conditions at those sites.

  12. Automated Camera Calibration

    NASA Technical Reports Server (NTRS)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target's fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
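
    The 3D-to-2D correspondence and model-fitting step that ACAL automates can be sketched with OpenCV's standard checkerboard workflow (a generic illustration only; this is not ACAL's code, and the target geometry and file paths below are placeholders):

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                  # inner corners of an assumed checkerboard target
        square = 0.025                    # assumed square size, metres
        # Known 3D locations of the target's fiducial (corner) points, on the z = 0 plane:
        obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

        obj_points, img_points = [], []
        for path in glob.glob("calib_images/*.png"):       # placeholder image set
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:                                      # measured 2D corner locations
                obj_points.append(obj)
                img_points.append(corners)

        # Fit the camera model (intrinsics plus lens distortion) to the 3D-to-2D pairs.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        print("reprojection RMS (pixels):", rms)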

  13. Microchannel plate streak camera

    DOEpatents

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  14. Microchannel plate streak camera

    DOEpatents

    Wang, C.L.

    1984-09-28

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (uv to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  15. Microchannel plate streak camera

    DOEpatents

    Wang, C.L.

    1989-03-21

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras is disclosed. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1,000 keV x-rays. 3 figs.

  16. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
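
    The kind of signature discussed, a periodic twice-per-rev error standing out in the auto-covariance of an inter-camera attitude series, can be illustrated with a toy series (entirely synthetic numbers, not GRACE data):

        import numpy as np

        def autocovariance(x, max_lag):
            """Biased sample auto-covariance of a 1-D series for lags 0..max_lag."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            n = x.size
            return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

        # Toy inter-camera angle series sampled every 5 s over one day:
        dt = 5.0
        t = np.arange(0.0, 86400.0, dt)
        orbit_period = 5670.0                  # approximate GRACE orbital period, s
        rng = np.random.default_rng(1)
        angle = (3.0 * np.sin(2 * np.pi * 2 * t / orbit_period)   # twice-per-rev term, arcsec
                 + rng.normal(0.0, 8.0, t.size))                  # white measurement noise

        acov = autocovariance(angle, max_lag=4000)
        # The periodic error survives averaging: acov oscillates with the half-orbit
        # period of the twice-per-rev term, while the white noise collapses into the
        # lag-0 spike.
        print("lag-0 variance:", acov[0],
              " value at one half-rev lag:", acov[int(orbit_period / 2 / dt)])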

  17. Analytical multicollimator camera calibration

    USGS Publications Warehouse

    Tayman, W.P.

    1978-01-01

    Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.

  18. Polarization encoded color camera.

    PubMed

    Schonbrun, Ethan; Möller, Guðfríður; Di Caprio, Giuseppe

    2014-03-15

    Digital cameras would be colorblind if they did not have pixelated color filters integrated into their image sensors. Integration of conventional fixed filters, however, comes at the expense of an inability to modify the camera's spectral properties. Instead, we demonstrate a micropolarizer-based camera that can reconfigure its spectral response. Color is encoded into a linear polarization state by a chiral dispersive element and then read out in a single exposure. The polarization encoded color camera is capable of capturing three-color images at wavelengths spanning the visible to the near infrared. PMID:24690806

  19. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  20. LSST Camera Optics Design

    SciTech Connect

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  1. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  2. The Camera Cook Book.

    ERIC Educational Resources Information Center

    Education Development Center, Inc., Newton, MA.

    Intended for use with the photographic materials available from the Workshop for Learning Things, Inc., this "camera cookbook" describes procedures that have been tried in classrooms and workshops and proven to be the most functional and inexpensive. Explicit starting off instructions--directions for exploring and loading the camera and for taking…

  3. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  4. CCD Luminescence Camera

    NASA Technical Reports Server (NTRS)

    Janesick, James R.; Elliott, Tom

    1987-01-01

    New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with a low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronics devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed where luminescence typically found.

  5. Dry imaging cameras

    PubMed Central

    Indrajit, IK; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-01-01

    Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences such as computing, mechanics, thermal engineering, optics, electricity and radiography. Broadly, hard copy devices are classified as laser-based and non-laser-based technologies. When compared with the working knowledge and technical awareness of different modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  6. 3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  7. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  8. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  9. Gamma camera purchasing.

    PubMed

    Wells, C P; Buxton-Thomas, M

    1995-03-01

    The purchase of a new gamma camera is a major undertaking and represents a long-term commitment for most nuclear medicine departments. The purpose of tendering for gamma cameras is to assess the best match between the requirements of the clinical department and the equipment available and not necessarily to buy the 'best camera' [1-3]. After many years of drawing up tender specifications, this paper tries to outline some of the traps and pitfalls of this potentially perilous, although largely rewarding, exercise. PMID:7770241

  10. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  11. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  12. Advanced CCD camera developments

    SciTech Connect

    Condor, A.

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  13. The MKID Camera

    NASA Astrophysics Data System (ADS)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.

  14. The Complementary Pinhole Camera.

    ERIC Educational Resources Information Center

    Bissonnette, D.; And Others

    1991-01-01

    Presents an experiment based on the principles of rectilinear motion of light operating in a pinhole camera that projects the image of an illuminated object through a small hole in a sheet to an image screen. (MDH)

  15. Miniature TV Camera

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Originally devised to observe Saturn stage separation during Apollo flights, Marshall Space Flight Center's Miniature Television Camera, measuring only 4 x 3 x 1 1/2 inches, quickly made its way to the commercial telecommunications market.

  16. Gamma ray camera

    SciTech Connect

    Robbins, C.D.; Wang, S.

    1980-09-09

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera, the image intensifier tube having a negatively charged flat scintillator screen and a flat photocathode layer and a grounded, flat output phosphor display screen all of the same dimension (unity image magnification) and all within a grounded metallic tube envelope and having a metallic, inwardly concave input window between the scintillator screen and the collimator.

  17. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  18. 9. VIEW OF CAMERA STATIONS UNDER CONSTRUCTION INCLUDING CAMERA CAR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. VIEW OF CAMERA STATIONS UNDER CONSTRUCTION INCLUDING CAMERA CAR ON RAILROAD TRACK AND FIXED CAMERA STATION 1400 (BUILDING NO. 42021) ABOVE, ADJACENT TO STATE HIGHWAY 39, LOOKING WEST, March 23, 1948. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  19. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking in a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. A low-volume array of such penetrator cameras could be deployed from an

  20. THE DARK ENERGY CAMERA

    SciTech Connect

    Flaugher, B.; Diehl, H. T.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Buckley-Geer, E. J.; Honscheid, K.; Abbott, T. M. C.; Bonati, M.; Antonik, M.; Brooks, D.; Ballester, O.; Cardiel-Sas, L.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Boprie, D.; Campa, J.; Castander, F. J.; Collaboration: DES Collaboration; and others

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
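
    The quoted pixel size, plate scale, and field of view are mutually consistent, as a quick check shows (the derived effective focal length and focal-plane size follow from the quoted numbers and are not values stated in the abstract):

        import math

        pixel_size = 15e-6          # m
        plate_scale = 0.263         # arcsec per pixel (from the abstract)
        fov_diameter_deg = 2.2      # degrees (from the abstract)

        # plate scale [arcsec/pixel] = 206265 * pixel_size / focal_length
        focal_length = 206265.0 * pixel_size / plate_scale                    # ~11.8 m effective
        focal_plane_diameter = math.radians(fov_diameter_deg) * focal_length  # ~0.45 m

        pixels_across = focal_plane_diameter / pixel_size                     # ~30,000 pixels
        print(f"effective focal length  ~ {focal_length:.1f} m")
        print(f"focal plane diameter    ~ {focal_plane_diameter*100:.0f} cm")
        print(f"pixels across the field ~ {pixels_across:,.0f}")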

  1. The CAMCAO infrared camera

    NASA Astrophysics Data System (ADS)

    Amorim, Antonio; Melo, Antonio; Alves, Joao; Rebordao, Jose; Pinhao, Jose; Bonfait, Gregoire; Lima, Jorge; Barros, Rui; Fernandes, Rui; Catarino, Isabel; Carvalho, Marta; Marques, Rui; Poncet, Jean-Marc; Duarte Santos, Filipe; Finger, Gert; Hubin, Norbert; Huster, Gotthard; Koch, Franz; Lizon, Jean-Louis; Marchetti, Enrico

    2004-09-01

    The CAMCAO instrument is a high-resolution near infrared (NIR) camera conceived to operate together with the new ESO Multi-conjugate Adaptive optics Demonstrator (MAD) with the goal of evaluating the feasibility of Multi-Conjugate Adaptive Optics techniques (MCAO) on the sky. It is a high-resolution wide field of view (FoV) camera that is optimized to use the extended correction of the atmospheric turbulence provided by MCAO. While the first purpose of this camera is sky observation in the MAD setup, to validate the MCAO technology, in a second phase the CAMCAO camera is planned to be attached directly to the VLT for scientific astrophysical studies. The camera is based on the 2k×2k HAWAII2 infrared detector controlled by an ESO external IRACE system and includes standard IR band filters mounted on a positional filter wheel. The CAMCAO design requires that the optical components and the IR detector be kept at low temperatures in order to avoid emitting radiation and to lower detector noise in the analysis region. The cryogenic system includes an LN2 tank and a specially developed pulse tube cryocooler. Field and pupil cold stops are implemented to reduce the infrared background and the stray light. The CAMCAO optics provide diffraction-limited performance down to the J band, but the detector sampling fulfills the Nyquist criterion for the K band (2.2 μm).

  2. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
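
    Dynamic range figures such as the 51.3 dB and 82.06 dB quoted above convert to linear contrast ratios as follows (a simple unit conversion, assuming the usual 20·log10 convention for image-sensor dynamic range):

        import math

        def db_to_ratio(db):
            """Convert an intensity dynamic range in dB (20*log10 convention) to a linear ratio."""
            return 10 ** (db / 20.0)

        for label, db in (("CMOS sensor alone", 51.3), ("CAOS-CMOS camera", 82.06)):
            print(f"{label}: {db:5.2f} dB  ->  about {db_to_ratio(db):,.0f} : 1")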

  3. The Dark Energy Camera

    SciTech Connect

    Flaugher, B.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  4. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems. PMID:27410361

  5. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

    Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66 ) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).
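
    The flavor of the attitude-model fit can be illustrated with a small linear least-squares example (a toy model only: a bias plus a once-per-day sinusoid fit to synthetic star-measurement residuals; this is not the patent's actual formulation or its walking least squares algorithm):

        import numpy as np

        # Synthetic star-measurement residuals for one optical axis (microradians):
        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 86400.0, 200)            # measurement times over one day, s
        omega = 2 * np.pi / 86400.0                   # once-per-day rate (geostationary orbit)
        true = np.array([40.0, 25.0, -10.0])          # bias, cosine and sine coefficients
        A = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
        meas = A @ true + rng.normal(0.0, 5.0, t.size)

        # Least-squares fit of the attitude-deviation model to the measurements; a
        # "walking" fit would simply repeat this over a sliding window of recent data.
        coeffs, *_ = np.linalg.lstsq(A, meas, rcond=None)
        deviation = A @ coeffs                        # modeled deviation, applied when mapping
        print("fitted [bias, cos, sin]:", np.round(coeffs, 2))   # pixels to latitude/longitude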

  6. The Dark Energy Camera

    NASA Astrophysics Data System (ADS)

    Flaugher, B.; Diehl, H. T.; Honscheid, K.; Abbott, T. M. C.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Antonik, M.; Ballester, O.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Bonati, M.; Boprie, D.; Brooks, D.; Buckley-Geer, E. J.; Campa, J.; Cardiel-Sas, L.; Castander, F. J.; Castilla, J.; Cease, H.; Cela-Ruiz, J. M.; Chappa, S.; Chi, E.; Cooper, C.; da Costa, L. N.; Dede, E.; Derylo, G.; DePoy, D. L.; de Vicente, J.; Doel, P.; Drlica-Wagner, A.; Eiting, J.; Elliott, A. E.; Emes, J.; Estrada, J.; Fausti Neto, A.; Finley, D. A.; Flores, R.; Frieman, J.; Gerdes, D.; Gladders, M. D.; Gregory, B.; Gutierrez, G. R.; Hao, J.; Holland, S. E.; Holm, S.; Huffman, D.; Jackson, C.; James, D. J.; Jonas, M.; Karcher, A.; Karliner, I.; Kent, S.; Kessler, R.; Kozlovsky, M.; Kron, R. G.; Kubik, D.; Kuehn, K.; Kuhlmann, S.; Kuk, K.; Lahav, O.; Lathrop, A.; Lee, J.; Levi, M. E.; Lewis, P.; Li, T. S.; Mandrichenko, I.; Marshall, J. L.; Martinez, G.; Merritt, K. W.; Miquel, R.; Muñoz, F.; Neilsen, E. H.; Nichol, R. C.; Nord, B.; Ogando, R.; Olsen, J.; Palaio, N.; Patton, K.; Peoples, J.; Plazas, A. A.; Rauch, J.; Reil, K.; Rheault, J.-P.; Roe, N. A.; Rogers, H.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R. H.; Schmidt, R.; Schmitt, R.; Schubnell, M.; Schultz, K.; Schurter, P.; Scott, L.; Serrano, S.; Shaw, T. M.; Smith, R. C.; Soares-Santos, M.; Stefanik, A.; Stuermer, W.; Suchyta, E.; Sypniewski, A.; Tarle, G.; Thaler, J.; Tighe, R.; Tran, C.; Tucker, D.; Walker, A. R.; Wang, G.; Watson, M.; Weaverdyck, C.; Wester, W.; Woods, R.; Yanny, B.; DES Collaboration

    2015-11-01

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electrons of readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
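
    As a quick consistency check on the numbers quoted in this record, the plate scale and pixel pitch together imply the effective focal length of the Blanco-plus-corrector optics, and the CCD mosaic sums to the quoted pixel count. The short calculation below is only a back-of-envelope illustration, not taken from the cited paper.

        import math

        pixel_pitch_m = 15e-6          # 15 um pixels
        plate_scale_arcsec = 0.263     # arcsec per pixel
        ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

        # plate scale (rad/pixel) = pixel pitch / focal length
        focal_length_m = pixel_pitch_m / (plate_scale_arcsec * ARCSEC_TO_RAD)
        print(f"effective focal length ~ {focal_length_m:.1f} m")    # ~11.8 m

        # 62 imaging CCDs of 2k x 4k plus 12 guide/focus CCDs of 2k x 2k
        total_pixels = 62 * 2048 * 4096 + 12 * 2048 * 2048
        print(f"total pixels ~ {total_pixels/1e6:.0f} Mpix")          # ~570 Mpix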

  7. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements, which employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, circuit description and changes to such circuits as a result of integration and test, maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  8. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera with such a short resolution time becomes possible.

  9. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants that create a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and display in color spectra for the Human Visual System (HVS). We sense, within those spectra, the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  10. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  11. GROT in NICMOS Cameras

    NASA Astrophysics Data System (ADS)

    Sosey, M.; Bergeron, E.

    1999-09-01

    Grot is exhibited as small areas of reduced sensitivity, most likely due to flecks of antireflective paint scraped off the optical baffles as they were forced against each other. This paper characterizes grot associated with all three cameras. Flat field images taken from March 1997 through January 1999 have been investigated for changes in the grot, including possible wavelength dependency and throughput characteristics. The main products of this analysis are grot masks for each of the cameras which may also contain any new cold or dead pixels not specified in the data quality arrays.

  12. Wide angle pinhole camera

    NASA Technical Reports Server (NTRS)

    Franke, J. M.

    1978-01-01

    Hemispherical refracting element gives pinhole camera 180 degree field-of-view without compromising its simplicity and depth-of-field. Refracting element, located just behind pinhole, bends light coming in from sides so that it falls within image area of film. In contrast to earlier pinhole cameras that used water or other transparent fluids to widen field, this model is not subject to leakage and is easily loaded and unloaded with film. Moreover, by selecting glass with different indices of refraction, field at film plane can be widened or reduced.

  13. Artificial human vision camera

    NASA Astrophysics Data System (ADS)

    Goudou, J.-F.; Maggio, S.; Fagno, M.

    2014-10-01

    In this paper we present a real-time vision system modeling the human vision system. Our purpose is to draw inspiration from the bio-mechanics of human vision to improve robotic capabilities for tasks such as object detection and tracking. This work first describes the bio-mechanical differences between human vision and classic cameras, and the retinal processing stage that takes place in the eye before the optic nerve. The second part describes our implementation of these principles on a 3-camera optical, mechanical and software model of the human eyes and an associated bio-inspired attention model.

  14. Snapshot polarimeter fundus camera.

    PubMed

    DeHoog, Edward; Luo, Haitao; Oka, Kazuhiko; Dereniak, Eustace; Schwiegerling, James

    2009-03-20

    A snapshot imaging polarimeter utilizing Savart plates is integrated into a fundus camera for retinal imaging. Acquired retinal images can be processed to reconstruct Stokes vector images, giving insight into the polarization properties of the retina. Results for images from a normal healthy retina and retinas with pathology are examined and compared. PMID:19305463

  15. Spas color camera

    NASA Technical Reports Server (NTRS)

    Toffales, C.

    1983-01-01

    The procedures to be followed in assessing the performance of the MOS color camera are defined. Aspects considered include: horizontal and vertical resolution; value of the video signal; gray scale rendition; environmental (vibration and temperature) tests; signal to noise ratios; and white balance correction.

  16. The LSST Camera Overview

    SciTech Connect

    Gilmore, Kirk; Kahn, Steven A.; Nordby, Martin; Burke, David; O'Connor, Paul; Oliver, John; Radeka, Veljko; Schalk, Terry; Schindler, Rafe; /SLAC

    2007-01-10

    The LSST camera is a wide-field optical (0.35-1 μm) imager designed to provide a 3.5 degree FOV with better than 0.2 arcsecond sampling. The detector format will be a circular mosaic providing approximately 3.2 Gigapixels per image. The camera includes a filter mechanism and shuttering capability. It is positioned in the middle of the telescope, where cross-sectional area is constrained by optical vignetting and heat dissipation must be controlled to limit thermal gradients in the optical beam. The fast, f/1.2 beam will require tight tolerances on the focal plane mechanical assembly. The focal plane array operates at a temperature of approximately -100 C to achieve desired detector performance. The focal plane array is contained within an evacuated cryostat, which incorporates detector front-end electronics and thermal control. The cryostat lens serves as an entrance window and vacuum seal for the cryostat. Similarly, the camera body lens serves as an entrance window and gas seal for the camera housing, which is filled with a suitable gas to provide the operating environment for the shutter and filter change mechanisms. The filter carousel can accommodate 5 filters, each 75 cm in diameter, for rapid exchange without external intervention.
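
    The 3.2-gigapixel figure follows directly from the field of view and sampling quoted in this record; the arithmetic below is an illustrative sanity check (assuming a roughly circular focal plane), not taken from the cited report.

        import math

        fov_deg = 3.5            # field-of-view diameter
        sampling_arcsec = 0.2    # plate sampling per pixel

        pixels_across = fov_deg * 3600.0 / sampling_arcsec       # pixels across the diameter
        circular_mosaic_pixels = math.pi / 4.0 * pixels_across**2
        print(f"{circular_mosaic_pixels/1e9:.1f} Gpix")           # ~3.1 Gpix, consistent with ~3.2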

  17. Jack & the Video Camera

    ERIC Educational Resources Information Center

    Charlan, Nathan

    2010-01-01

    This article narrates how the use of video camera has transformed the life of Jack Williams, a 10-year-old boy from Colorado Springs, Colorado, who has autism. The way autism affected Jack was unique. For the first nine years of his life, Jack remained in his world, alone. Functionally non-verbal and with motor skill problems that affected his…

  18. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. ?? 1984.

  19. Communities, Cameras, and Conservation

    ERIC Educational Resources Information Center

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  20. Anger Camera Firmware

    2010-11-19

    The firmware is responsible for the operation of the Anger Camera Electronics, the calculation of position and time of flight, and digital communications. It provides first-stage analysis of 48 signals that have been digitized from the 48 analog channels using A/D converters.
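
    The position calculation mentioned in this record is, in an Anger camera, classically a signal-weighted centroid ("Anger logic") over the digitized channel amplitudes. The sketch below shows that idea in its simplest form, assuming the 48 digitized values come with known detector coordinates per channel; it is an illustration, not the actual firmware algorithm.

        import numpy as np

        def anger_position(signals, channel_x, channel_y):
            """Estimate the interaction position as the signal-weighted centroid.

            signals   : array of 48 digitized channel amplitudes
            channel_x : x coordinate of each channel on the detector face
            channel_y : y coordinate of each channel on the detector face
            """
            s = np.asarray(signals, dtype=float)
            total = s.sum()
            if total <= 0:
                raise ValueError("no signal")
            x = np.dot(s, channel_x) / total
            y = np.dot(s, channel_y) / total
            return x, y, total  # the total also serves as an energy estimate

        # Example with a synthetic 48-channel event whose light pools near (2, -1).
        rng = np.random.default_rng(0)
        xs = rng.uniform(-10, 10, 48)
        ys = rng.uniform(-10, 10, 48)
        sig = np.exp(-((xs - 2.0)**2 + (ys + 1.0)**2) / 8.0)
        print(anger_position(sig, xs, ys))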

  1. Make a Pinhole Camera

    ERIC Educational Resources Information Center

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  2. Advanced Virgo phase cameras

    NASA Astrophysics Data System (ADS)

    van der Schaaf, L.; Agatsuma, K.; van Beuzekom, M.; Gebyehu, M.; van den Brand, J.

    2016-05-01

    A century after the prediction of gravitational waves, detectors have reached the sensitivity needed to prove their existence. One of them, the Virgo interferometer in Pisa, is presently being upgraded to Advanced Virgo (AdV) and will come into operation in 2016. The power stored in the interferometer arms rises from 20 to 700 kW. This increase is expected to introduce higher order modes in the beam, which could reduce the circulating power in the interferometer, limiting the sensitivity of the instrument. To suppress these higher-order modes, the core optics of Advanced Virgo is equipped with a thermal compensation system. Phase cameras, monitoring the real-time status of the beam, constitute a critical component of this compensation system. These cameras measure the phases and amplitudes of the laser-light fields at the frequencies selected to control the interferometer. The measurement combines heterodyne detection with a scan of the wave front over a photodetector with a pin-hole aperture. Three cameras observe the phase front of these laser sidebands. Two of them monitor the input and output of the interferometer arms, and the third one is used in the control of the aberrations introduced by the power recycling cavity. In this paper the working principle of the phase cameras is explained and some characteristic parameters are described.
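
    The amplitude-and-phase measurement described in this record is, in essence, a lock-in (I/Q) demodulation of the pinhole photodetector signal at each selected sideband frequency, repeated for every scan position of the wave front. Below is a minimal numerical sketch of the demodulation step with made-up signal parameters; the real phase cameras of course operate on hardware-digitized data at the interferometer control frequencies.

        import numpy as np

        def demodulate(samples, fs, f_sideband):
            """Return (amplitude, phase) of the component of `samples` at f_sideband."""
            t = np.arange(len(samples)) / fs
            i = np.mean(samples * np.cos(2 * np.pi * f_sideband * t)) * 2.0   # in-phase
            q = np.mean(samples * np.sin(2 * np.pi * f_sideband * t)) * 2.0   # quadrature
            return np.hypot(i, q), np.arctan2(-q, i)

        # Synthetic beat note: 0.3 V amplitude, 40 degree phase, 6.27 MHz, sampled at 100 MS/s.
        fs, f0 = 100e6, 6.27e6
        t = np.arange(20000) / fs
        x = 0.3 * np.cos(2 * np.pi * f0 * t + np.radians(40)) + 0.01 * np.random.randn(t.size)
        print(demodulate(x, fs, f0))   # ~ (0.3, 0.698 rad)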

  3. Imaging phoswich anger camera

    NASA Astrophysics Data System (ADS)

    Manchanda, R. K.; Sood, R. K.

    1991-08-01

    High angular resolution and low background are the primary requisites for detectors in future astronomy experiments in the low energy gamma-ray region. Scintillation counters are still the only available large area detectors for studies in this energy range. Preliminary details of a large area phoswich Anger camera designed for coded aperture imaging are described, and its background and position characteristics are discussed.

  4. Millisecond readout CCD camera

    NASA Astrophysics Data System (ADS)

    Prokop, Mark; McCurnin, Thomas W.; Stradling, Gary L.

    1993-01-01

    We have developed a prototype of a fast-scanning CCD readout system to record a 1024 x 256 pixel image and transport the image to a recording station within 1 ms of the experimental event. The system is designed to have a dynamic range of greater than 1000 with adequate sensitivity to read single-electron excitations of a CRT phosphor when amplified by a microchannel plate image intensifier. This readout camera is intended for recording images from oscilloscopes, streak, and framing cameras. The sensor is a custom CCD chip, designed by LORAL Aeroneutronics. This CCD chip is designed with 16 parallel output ports to supply the necessary image transfer speed. The CCD is designed as an interline structure to allow fast clearing of the image and on-chip fast shuttering. Special antiblooming provisions are also included. The camera is designed to be modular and to allow CCD chips of other sizes to be used with minimal reengineering of the camera head.

  5. Millisecond readout CCD camera

    NASA Astrophysics Data System (ADS)

    Prokop, M.; McCurnin, T. W.; Stradling, G.

    We have developed a prototype of a fast-scanning CCD readout system to record a 1024 x 256 pixel image and transport the image to a recording station within 1 ms of the experimental event. The system is designed to have a dynamic range of greater than 1000 with adequate sensitivity to read single-electron excitations of a CRT phosphor when amplified by a microchannel plate image intensifier. This readout camera is intended for recording images from oscilloscopes, streak, and framing cameras. The sensor is a custom CCD chip, designed by LORAL Aeroneutronics. This CCD chip is designed with 16 parallel output ports to supply the necessary image transfer speed. The CCD is designed as an interline structure to allow fast clearing of the image and on-chip fast shuttering. Special antiblooming provisions are also included. The camera is designed to be modular and to allow CCD chips of other sizes to be used with minimal reengineering of the camera head.
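
    As a rough throughput check on the readout figures quoted in the two records above (which describe the same camera), splitting a 1024 x 256 frame across 16 output ports within 1 ms implies the per-port pixel rate computed below; this is an illustrative back-of-envelope estimate only.

        pixels = 1024 * 256      # frame size
        ports = 16               # parallel CCD output ports
        readout_s = 1e-3         # required frame transport time

        rate_per_port = pixels / ports / readout_s
        print(f"{rate_per_port/1e6:.1f} Mpixel/s per port")   # ~16.4 Mpixel/s per port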

  6. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  7. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, a rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
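
    The sub-pixel interpolation mentioned in this record, used when remapping the retina-like pixel layout onto a rectangular grid, can be done with ordinary bilinear interpolation at non-integer source coordinates. The snippet below is a generic sketch of that operation in Python/NumPy (the original system used VC++ with the MIL and OpenCV libraries); function and variable names are illustrative.

        import numpy as np

        def bilinear_sample(image, x, y):
            """Sample `image` at the non-integer location (x, y) by bilinear interpolation."""
            h, w = image.shape
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            fx, fy = x - x0, y - y0

            top = (1.0 - fx) * image[y0, x0] + fx * image[y0, x1]
            bottom = (1.0 - fx) * image[y1, x0] + fx * image[y1, x1]
            return (1.0 - fy) * top + fy * bottom

        img = np.arange(16, dtype=float).reshape(4, 4)
        print(bilinear_sample(img, 1.5, 2.25))   # interpolates between rows 2-3 and columns 1-2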

  8. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  9. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.
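
    The first step described in this record, finding matching features in the overlap region between adjacent cameras, can be sketched with standard OpenCV feature detection and matching; the CAHVOR model refinement itself is not shown. This is a generic illustration, not JPL FineCal code, and the file names are placeholders.

        import cv2

        # Load overlapping images from two adjacent cameras (placeholder file names).
        img_a = cv2.imread("camera_a.png", cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread("camera_b.png", cv2.IMREAD_GRAYSCALE)

        # Detect and describe features, then match descriptors between the two images.
        orb = cv2.ORB_create(nfeatures=2000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

        # Matched pixel coordinates; tie points like these would feed the camera-model refinement.
        tie_points = [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:200]]
        print(f"{len(tie_points)} candidate tie points")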

  10. Combustion pinhole camera system

    DOEpatents

    Witte, A.B.

    1984-02-21

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor. 2 figs.

  11. Combustion pinhole camera system

    DOEpatents

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  12. LSST Camera Optics

    SciTech Connect

    Olivier, S S; Seppala, L; Gilmore, K; Hale, L; Whistler, W

    2006-06-05

    The Large Synoptic Survey Telescope (LSST) is a unique, three-mirror, modified Paul-Baker design with an 8.4m primary, a 3.4m secondary, and a 5.0m tertiary feeding a camera system that includes corrector optics to produce a 3.5 degree field of view with excellent image quality (<0.3 arcsecond 80% encircled diffracted energy) over the entire field from blue to near infra-red wavelengths. We describe the design of the LSST camera optics, consisting of three refractive lenses with diameters of 1.6m, 1.0m and 0.7m, along with a set of interchangeable, broad-band, interference filters with diameters of 0.75m. We also describe current plans for fabricating, coating, mounting and testing these lenses and filters.

  13. NSTX Tangential Divertor Camera

    SciTech Connect

    A.L. Roquemore; Ted Biewer; D. Johnson; S.J. Zweben; Nobuhiro Nishino; V.A. Soukhanovskii

    2004-07-16

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has been recently installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. Edge fluid and turbulent codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor.

  14. Hemispherical Laue camera

    DOEpatents

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of sphere of a hemispherical, X-radiation sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.

  15. Gamma ray camera

    DOEpatents

    Perez-Mendez, V.

    1997-01-21

    A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer, photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal preferably doped with a predetermined amount of impurity, and the p-type upper intermediate layers and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.

  16. Gamma ray camera

    DOEpatents

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer, photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal preferably doped with a predetermined amount of impurity, and the p-type upper intermediate layers and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  17. Orbiter Camera Payload System

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Components for an orbiting camera payload system (OCPS) include the large format camera (LFC), a gas supply assembly, and ground test, handling, and calibration hardware. The LFC, a high resolution large format photogrammetric camera for use in the cargo bay of the space transport system, is also adaptable to use on an RB-57 aircraft or on a free flyer satellite. Carrying 4000 feet of film, the LFC is usable over the visible to near IR, at V/h rates of from 11 to 41 milliradians per second, overlap of 10, 60, 70 or 80 percent and exposure times of from 4 to 32 milliseconds. With a 12 inch focal length it produces a 9 by 18 inch format (long dimension in line of flight) with full format low contrast resolution of 88 lines per millimeter (AWAR), full format distortion of less than 14 microns and a complement of 45 Reseau marks and 12 fiducial marks. Weight of the OCPS as supplied, fully loaded is 944 pounds and power dissipation is 273 watts average when in operation, 95 watts in standby. The LFC contains an internal exposure sensor, or will respond to external command. It is able to photograph starfields for inflight calibration upon command.

  18. DEVICE CONTROLLER, CAMERA CONTROL

    1998-07-20

    This is a C++ application that is the server for the camera control system. Devserv drives serial devices, such as cameras and videoswitchers used in a videoconference, upon request from a client such as the ccint program. Devserv listens on UDP ports for clients to make network connections. After a client connects and sends a request to control a device (such as to pan, tilt, or zoom a camera, or do picture-in-picture with a videoswitcher), devserv formats the request into an RS232 message appropriate for the device and sends this message over the serial port to which the device is connected. Devserv then reads the reply from the device from the serial port and then formats and sends via multicast a status message. In addition, devserv periodically multicasts status or description messages so that all clients connected to the multicast channel know what devices are supported, their ranges of motion, and the current position. The software design employs a class hierarchy such that an abstract base class for devices can be subclassed into classes for various device categories (e.g. sonyevid30, canonvcc4, panasonicwjmx50, etc.), which are further subclassed into classes for specific devices. The devices currently supported are the Sony EVI-D30, Canon VCC1, Canon VCC3, and Canon VCC4 cameras and the Panasonic WJ-MX50 videoswitcher. However, developers can extend the class hierarchy to support other devices.

  19. Adaptive compressive sensing camera

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm on a Charge-Coupled-Device (CCD) camera, based on the simple concept that each pixel is a charge bucket whose charge comes from the Einstein photoelectric effect. Applying the manufacturing design principle, we allow each working component to be altered by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data storage savings are immense, and the order of magnitude of the saving is inversely proportional to target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as dual Photon Detector (PD) analog circuitry for change detection that decides whether to skip or admit a frame at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket-pixel level by biasing the charge-transport voltage toward neighboring buckets or not; if not, the charge goes to the ground drainage. Since the snapshot image is not a video, we cannot apply the usual MPEG video compression and Huffman entropy codec, nor a powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing with an FFT, thresholding of significant Fourier mode components, and an inverse FFT to check PSNR; and (ii) post-processing image recovery done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry must determine, for newly selected frames, the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data via [Φ]_{M,N} with M(t) = K(t) log N(t).
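
    To make the measurement-and-recovery idea in this record concrete: a compressive sensing camera records y = Φx, where Φ is a random sparse measurement matrix with M rows for N pixels, and reconstruction seeks the sparsest x consistent with y via L1 minimization. The sketch below uses an iterative soft-thresholding (ISTA) solver rather than the paper's linear programming, with made-up sizes; it is an illustration only.

        import numpy as np

        rng = np.random.default_rng(1)
        N, M, K = 256, 80, 6                    # pixels, measurements, nonzeros (illustrative)

        x_true = np.zeros(N)
        x_true[rng.choice(N, K, replace=False)] = rng.uniform(1.0, 2.0, K)   # sparse scene

        Phi = rng.choice([0.0, 1.0], size=(M, N), p=[0.9, 0.1])              # sparse random [Phi]
        y = Phi @ x_true                                                     # compressive measurements

        def ista(Phi, y, lam=0.05, iters=3000):
            """L1-regularized recovery: minimize 0.5*||Phi x - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(Phi.shape[1])
            for _ in range(iters):
                grad = Phi.T @ (Phi @ x - y)
                z = x - grad / L
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
            return x

        x_hat = ista(Phi, y)
        print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))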

  20. DEVICE CONTROLLER, CAMERA CONTROL

    SciTech Connect

    Perry, Marcia

    1998-07-20

    This is a C++ application that is the server for the camera control system. Devserv drives serial devices, such as cameras and videoswitchers used in a videoconference, upon request from a client such as the ccint program. Devserv listens on UDP ports for clients to make network connections. After a client connects and sends a request to control a device (such as to pan, tilt, or zoom a camera, or do picture-in-picture with a videoswitcher), devserv formats the request into an RS232 message appropriate for the device and sends this message over the serial port to which the device is connected. Devserv then reads the reply from the device from the serial port and then formats and sends via multicast a status message. In addition, devserv periodically multicasts status or description messages so that all clients connected to the multicast channel know what devices are supported, their ranges of motion, and the current position. The software design employs a class hierarchy such that an abstract base class for devices can be subclassed into classes for various device categories (e.g. sonyevid30, canonvcc4, panasonicwjmx50, etc.), which are further subclassed into classes for specific devices. The devices currently supported are the Sony EVI-D30, Canon VCC1, Canon VCC3, and Canon VCC4 cameras and the Panasonic WJ-MX50 videoswitcher. However, developers can extend the class hierarchy to support other devices.
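
    The class hierarchy described in this record (an abstract device base class, subclassed per device category and then per specific device, each responsible for formatting its own RS232 messages) can be sketched as follows. This is a generic illustration in Python rather than the server's actual C++ classes, and all class and method names, as well as the message format, are illustrative.

        from abc import ABC, abstractmethod

        class SerialDevice(ABC):
            """Abstract base: every device turns high-level requests into RS232 bytes."""

            @abstractmethod
            def format_request(self, command: str, **params) -> bytes:
                """Return the serial message implementing `command` for this device."""

        class PanTiltZoomCamera(SerialDevice):
            """Device category: cameras supporting pan/tilt/zoom (still abstract)."""

        class SonyEviD30(PanTiltZoomCamera):
            """Specific device; the message format here is illustrative, not the real protocol."""

            def format_request(self, command: str, **params) -> bytes:
                if command == "pan_tilt":
                    return f"PT {params['pan']} {params['tilt']}\r".encode()
                if command == "zoom":
                    return f"ZOOM {params['level']}\r".encode()
                raise ValueError(f"unsupported command: {command}")

        # A server like devserv would look up the device object for a client request,
        # write device.format_request(...) to the serial port, read the reply, and
        # multicast a status message to all connected clients.
        camera = SonyEviD30()
        print(camera.format_request("pan_tilt", pan=10, tilt=-5))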

  1. Neutron imaging camera

    NASA Astrophysics Data System (ADS)

    Hunter, S. D.; de Nolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-04-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, ~0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, En > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3-DTI volume. The performance of the NIC in laboratory tests is presented.

  2. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, En > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3-DTI volume. The performance of the NIC from laboratory and accelerator tests is presented.

  3. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
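
    The quoted fields of view, detector size, and pixel scales in this record are roughly mutually consistent, as the small check below illustrates; it is a back-of-envelope calculation, not from the cited paper, and ignores distortion and the difference between on-axis and average pixel scale.

        import math

        pixels = 1024
        cameras = [("Navcam", 45.0, 0.82), ("Hazcam", 124.0, 2.1)]   # (name, FOV deg, quoted mrad/pixel)

        for name, fov_deg, quoted_mrad in cameras:
            implied_mrad = math.radians(fov_deg) * 1000.0 / pixels
            print(f"{name}: {implied_mrad:.2f} mrad/pixel implied vs {quoted_mrad} quoted")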

  4. HONEY -- The Honeywell Camera

    NASA Astrophysics Data System (ADS)

    Clayton, C. A.; Wilkins, T. N.

    The Honeywell model 3000 colour graphic recorder system (hereafter referred to simply as Honeywell) has been bought by Starlink for producing publishable quality photographic hardcopy from the IKON image displays. Full colour and black & white images can be recorded on positive or negative 35mm film. The Honeywell consists of a built-in high resolution flat-faced monochrome video monitor, a red/green/blue colour filter mechanism and a 35mm camera. The device works on the direct video signals from the IKON. This means that changing the brightness or contrast on the IKON monitor will not affect any photographs that you take. The video signals from the IKON consist of separate red, green and blue signals. When you take a picture, the Honeywell takes the red, green and blue signals in turn and displays three pictures consecutively on its internal monitor. It takes an exposure through each of three filters (red, green and blue) onto the film in the camera. This builds up the complete colour picture on the film. Honeywell systems are installed at nine Starlink sites, namely Belfast (locally funded), Birmingham, Cambridge, Durham, Leicester, Manchester, Rutherford, ROE and UCL.

  5. WFOV star tracker camera

    SciTech Connect

    Lewis, I.T. ); Ledebuhr, A.G.; Axelrod, T.S.; Kordas, J.F.; Hills, R.F. )

    1991-04-01

    A prototype wide-field-of-view (WFOV) star tracker camera has been fabricated and tested for use in spacecraft navigation. The most unique feature of this device is its 28° × 44° FOV, which views a large enough sector of the sky to ensure the existence of at least 5 stars of m_v = 4.5 or brighter in all viewing directions. The WFOV requirement and the need to maximize both collection aperture (F/1.28) and spectral input band (0.4 to 1.1 μm) to meet the light gathering needs for the dimmest star have dictated the use of a novel concentric optical design, which employs a fiber optic faceplate field flattener. The main advantage of the WFOV configuration is the smaller star map required for position processing, which results in less processing power and faster matching. Additionally, a size and mass benefit is seen with a larger FOV/smaller effective focal length (efl) sensor. Prototype hardware versions have included both image intensified and un-intensified CCD cameras. Integration times of ≤ 50 msec have been demonstrated with both the intensified and un-intensified versions. 3 refs., 16 figs.
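
    The claim in this record that the 28° × 44° field always contains at least 5 stars of m_v = 4.5 or brighter can be made plausible with a rough average-density estimate, sketched below. The whole-sky count of roughly 900 stars brighter than m_v = 4.5 is an assumed round number for illustration only, and the estimate ignores the clustering of bright stars toward the Galactic plane.

        import math

        fov_a_deg, fov_b_deg = 28.0, 44.0
        stars_brighter_than_4p5 = 900          # assumed whole-sky count, for illustration only

        # Solid angle of a rectangular field of view (pyramid with apex angles a and b).
        a, b = math.radians(fov_a_deg), math.radians(fov_b_deg)
        omega = 4.0 * math.asin(math.sin(a / 2.0) * math.sin(b / 2.0))

        fraction_of_sky = omega / (4.0 * math.pi)
        print(f"FOV covers {100*fraction_of_sky:.1f}% of the sky")                       # ~2.9%
        print(f"~{stars_brighter_than_4p5 * fraction_of_sky:.0f} such stars in the FOV on average")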

  6. NFC - Narrow Field Camera

    NASA Astrophysics Data System (ADS)

    Koukal, J.; Srba, J.; Gorková, S.

    2015-01-01

    We have been introducing a low-cost CCTV video system for faint meteor monitoring, and here we describe the first results from 5 months of two-station operations. Our system, called NFC (Narrow Field Camera), with a meteor limiting magnitude around +6.5 mag, allows research on trajectories of less massive meteoroids within individual parent meteor showers and the sporadic background. At present 4 stations (2 pairs with coordinated fields of view) of the NFC system are operated in the frame of CEMeNt (Central European Meteor Network). The heart of each NFC station is a sensitive CCTV camera Watec 902 H2 and a fast cinematographic lens Meopta Meostigmat 1/50 - 52.5 mm (50 mm focal length and fixed aperture f/1.0). In this paper we present the first results based on 1595 individual meteors, 368 of which were recorded from two stations simultaneously. This data set allows the first empirical verification of theoretical assumptions about NFC system capabilities (stellar and meteor magnitude limit, meteor apparent brightness distribution and accuracy of single station measurements) and the first low mass meteoroid trajectory calculations. Our experimental data clearly demonstrate the capability of the proposed system to register low mass meteors, and show that calculations based on NFC data lead to a significant refinement of the orbital elements for low mass meteoroids.

  7. PAU camera: detectors characterization

    NASA Astrophysics Data System (ADS)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the next months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels, each one with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K. K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), 40 of which are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response should be characterized and optimized for use in PAUCam. This job is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is being carried out by means of an OG (Output Gate) scan, maximizing the CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which yields the electronic gain, the linearity vs. light stimulus, the full-well capacity and the cosmetic defects; and measurements of the read-out noise, the dark current, the stability vs. temperature and the light remanence.
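
    The photon transfer curve measurement mentioned in this record extracts the electronic gain from the shot-noise statistics of flat-field pairs: after bias subtraction, the variance of the difference of two equal flats is twice the per-image shot-noise variance, so gain (e-/ADU) = (mean1 + mean2) / var(difference). The snippet below is a generic sketch of a single point on the PTC with synthetic data; it is not PAUCam laboratory code.

        import numpy as np

        def ptc_gain(flat1, flat2, bias):
            """Single-point photon-transfer-curve gain estimate in electrons per ADU."""
            f1 = flat1.astype(float) - bias
            f2 = flat2.astype(float) - bias
            mean_signal = f1.mean() + f2.mean()
            var_diff = np.var(f1 - f2)            # differencing removes fixed-pattern noise
            return mean_signal / var_diff          # = 2*mu / (2*sigma_shot^2) = gain

        # Synthetic example: true gain 1.5 e-/ADU, 10000 e- exposure, 1000 ADU bias.
        rng = np.random.default_rng(3)
        gain_true, electrons, bias_level = 1.5, 10000.0, 1000.0
        shape = (512, 512)

        def make_flat():
            return rng.poisson(electrons, shape) / gain_true + bias_level

        bias_frame = np.full(shape, bias_level)
        print(f"estimated gain ~ {ptc_gain(make_flat(), make_flat(), bias_frame):.2f} e-/ADU")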

  8. MEMS digital camera

    NASA Astrophysics Data System (ADS)

    Gutierrez, R. C.; Tang, T. K.; Calvet, R.; Fossum, E. R.

    2007-02-01

    MEMS technology uses photolithography and etching of silicon wafers to enable mechanical structures with less than 1 μm tolerance, important for the miniaturization of imaging systems. In this paper, we present the first silicon MEMS digital auto-focus camera for use in cell phones with a focus range of 10 cm to infinity. At the heart of the new silicon MEMS digital camera, a simple and low-cost electromagnetic actuator impels a silicon MEMS motion control stage on which a lens is mounted. The silicon stage ensures precise alignment of the lens with respect to the imager, and enables precision motion of the lens over a range of 300 μm with < 5 μm hysteresis and < 2 μm repeatability. Settling time is < 15 ms for a 200 μm step, and < 5 ms for a 20 μm step, enabling AF within 0.36 sec at 30 fps. The precise motion allows COTS optics to maintain MTF > 0.8 at 20 cy/mm up to 80% field over the full range of motion. Accelerated lifetime testing has shown that the alignment and precision of motion is maintained after 8,000 g shocks, thermal cycling from -40 C to 85 C, and operation over 20 million cycles.

  9. Stereoscopic camera design

    NASA Astrophysics Data System (ADS)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  10. Transmission electron microscope CCD camera

    DOEpatents

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  11. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

    We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, E_N > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3-DTI volume. We present angular and energy resolution performance of the NIC derived from accelerator tests.

  12. A Motionless Camera

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  13. Camera Calibration for Uav Application Using Sensor of Mobile Camera

    NASA Astrophysics Data System (ADS)

    Takahashi, Y.; Chikatsu, H.

    2015-05-01

    Recently, 3D measurements using small unmanned aerial vehicles (UAVs) have increased in Japan, because small UAVs are easily available at low cost and the analysis software can easily create 3D models. However, small UAVs have a problem: they have very short flight times and a small payload. In particular, as the payload of a small UAV increases, its flight time decreases. Therefore, it is advantageous to use lightweight sensors in small UAVs. A mobile camera is lightweight and has many sensors, such as an accelerometer, a magnetic field sensor, and a gyroscope; moreover, these sensors can be used simultaneously. Therefore, the authors think that the problems of small UAVs can be solved using the mobile camera. The authors performed camera calibration using a test target to evaluate the sensor values measured with a mobile camera. Consequently, the authors confirmed accuracy comparable to normal camera calibration.
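
    The camera calibration step described in this record, estimating intrinsics and distortion from images of a planar test target, is commonly done with the standard OpenCV checkerboard pipeline, sketched below. This is a generic illustration, not the authors' procedure; the board geometry and file list are placeholders.

        import glob
        import cv2
        import numpy as np

        board_cols, board_rows, square_mm = 9, 6, 25.0          # placeholder target geometry

        # 3D object points of the inner checkerboard corners in the target plane (z = 0).
        objp = np.zeros((board_rows * board_cols, 3), np.float32)
        objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2) * square_mm

        obj_points, img_points, image_size = [], [], None
        for path in glob.glob("calib_images/*.jpg"):             # placeholder image set
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
            if found:
                obj_points.append(objp)
                img_points.append(corners)
                image_size = gray.shape[::-1]

        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
        print("RMS reprojection error:", rms)
        print("camera matrix:\n", K)
        print("distortion coefficients:", dist.ravel())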

  14. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  15. Radiation camera motion correction system

    DOEpatents

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  16. Airborne ballistic camera tracking systems

    NASA Technical Reports Server (NTRS)

    Redish, W. L.

    1976-01-01

    An operational airborne ballistic camera tracking system was tested for operational and data reduction feasibility. The acquisition and data processing requirements of the system are discussed. Suggestions for future improvements are also noted. A description of the data reduction mathematics is outlined. Results from a successful reentry test mission are tabulated. The test mission indicated that airborne ballistic camera tracking systems are feasible.

  17. Camera artifacts in IUE spectra

    NASA Technical Reports Server (NTRS)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images with an accompanying table of prominent artifacts, and a table of prominent artifacts in the raw images, along with a median image of the sky background for each IUE camera.

  18. Multi-PSPMT scintillation camera

    SciTech Connect

    Pani, R.; Pellegrini, R.; Trotta, G.; Scopinaro, F.; Soluri, A.; Vincentis, G. de; Scafe, R.; Pergola, A.

    1999-06-01

    Gamma ray imaging is usually accomplished by the use of a relatively large scintillating crystal coupled to either a number of photomultipliers (PMTs) (Anger Camera) or to a single large Position Sensitive PMT (PSPMT). Recently the development of new diagnostic techniques, such as scintimammography and radio-guided surgery, have highlighted a number of significant limitations of the Anger camera in such imaging procedures. In this paper a dedicated gamma camera is proposed for clinical applications with the aim of improving image quality by utilizing detectors with an appropriate size and shape for the part of the body under examination. This novel scintillation camera is based upon an array of PSPMTs (Hamamatsu R5900-C8). The basic concept of this camera is identical to the Anger Camera with the exception of the substitution of PSPMTs for the PMTs. In this configuration it is possible to use the high resolution of the PSPMTs and still correctly position events lying between PSPMTs. In this work the test configuration is a 2 by 2 array of PSPMTs. Some advantages of this camera are: spatial resolution less than 2 mm FWHM, good linearity, thickness less than 3 cm, light weight, lower cost than equivalent area PSPMT, large detection area when coupled to scintillating arrays, small dead boundary zone (< 3 mm) and flexibility in the shape of the camera.

  19. Mars Exploration Rover engineering cameras

    USGS Publications Warehouse

    Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.

  20. The "All Sky Camera Network"

    ERIC Educational Resources Information Center

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites. Meteorites have great…

  1. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  2. Lights, Camera, Courtroom? Should Trials Be Televised?

    ERIC Educational Resources Information Center

    Kirtley, Jane E.; Brothers, Thomas W.; Veal, Harlan K.

    1999-01-01

    Presents three differing perspectives from American Bar Association members on whether television cameras should be allowed in the courtroom. Contends that cameras should be allowed with differing degrees of certainty: cameras truly open the courts to the public; cameras must be strategically placed; and cameras should be used only with the…

  3. Proportional counter radiation camera

    DOEpatents

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays, which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)

  4. Camera sensitivity study

    NASA Astrophysics Data System (ADS)

    Schlueter, Jonathan; Murphey, Yi L.; Miller, John W. V.; Shridhar, Malayappan; Luo, Yun; Khairallah, Farid

    2004-12-01

    As the cost/performance ratio of vision systems improves with time, new classes of applications become feasible. One such area, automotive applications, is currently being investigated. Applications include occupant detection, collision avoidance and lane tracking. Interest in occupant detection has been spurred by federal automotive safety rules in response to injuries and fatalities caused by deployment of occupant-side air bags. In principle, a vision system could control airbag deployment to prevent this type of mishap. Employing vision technology here, however, presents a variety of challenges, which include controlling costs, the inability to control illumination, developing and training a reliable classification system, and loss of performance caused by production variations arising from manufacturing tolerances and customer options. This paper describes the measures that have been developed to evaluate the sensitivity of an occupant detection system to these types of variations. Two procedures are described for evaluating how sensitive the classifier is to camera variations. The first procedure is based on classification accuracy while the second evaluates feature differences.

  5. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.
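
    To put the quoted dynamic ranges on one scale (an illustrative aside, not from the original text), intensity ratios are commonly expressed in decibels as 20 times the base-10 logarithm of the ratio. A minimal sketch for the >10,000:1 figure reached by multiple exposure and the roughly 1,000,000:1 range attributed to log-converting pixels:

      import math

      def ratio_to_db(ratio):
          """Express an intensity dynamic-range ratio in decibels."""
          return 20.0 * math.log10(ratio)

      print(ratio_to_db(10_000))     # ~80 dB (multiple-exposure approach)
      print(ratio_to_db(1_000_000))  # ~120 dB (log-converting pixels)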

  6. An Inexpensive Digital Infrared Camera

    ERIC Educational Resources Information Center

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  7. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development and test are described of a charge injection device (CID) camera using a 244x248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low light level performance, high S/N ratio, antiblooming, geometric distortion, sequential scanning and AGC.

  8. The future of consumer cameras

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have become dramatically more widespread. Moreover, their increasing computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology will be given, providing also some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  9. Astronomy and the camera obscura

    NASA Astrophysics Data System (ADS)

    Feist, M.

    2000-02-01

    The camera obscura (from Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician, Johannes Kepler used a small tent camera obscura to trace the scenery.

  10. Picosecond (picoframe) framing camera evaluations.

    PubMed

    Liu, Y; Sibbett, W; Walker, D R

    1992-03-01

    Detailed theoretical evaluations of picoframe-I- and II-type framing cameras are presented, and predicted performance characteristics are compared with experimental results. The methods of theoretical simulations are described, and a suite of computer programs was developed. The theoretical analyses indicate that the existence of fringe fields in the vicinity of the deflectors is the main factor that limits the dynamic spatial resolutions and frame times of these particular designs of framing camera, and possible refinements are outlined. PMID:20720702

  11. Science, conservation, and camera traps

    USGS Publications Warehouse

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  12. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, a dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high number of images per object point is concentrated at the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or, in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects in the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding

  13. A forward view on reliable computers for flight control

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Wensley, J. H.

    1976-01-01

    The requirements for fault-tolerant computers for flight control of commercial aircraft are examined; it is concluded that the reliability requirements far exceed those typically quoted for space missions. Examination of circuit technology and alternative computer architectures indicates that the desired reliability can be achieved with several different computer structures, though there are obvious advantages to those that are more economic, more reliable, and, very importantly, more certifiable as to fault tolerance. Progress in this field is expected to bring about better computer systems that are more rigorously designed and analyzed even though computational requirements are expected to increase significantly.

  14. CARTOGAM: a portable gamma camera

    NASA Astrophysics Data System (ADS)

    Gal, O.; Izac, C.; Lainé, F.; Nguyen, A.

    1997-02-01

    The gamma camera is designed to map radioactive sources against a visible background in quasi real time. The device is intended to locate sources from a distance during the preparation of interventions in active areas of nuclear installations, and will make it possible to optimize interventions, especially at the dosimetric level. The camera consists of a double cone collimator, a scintillator and an intensified CCD camera. This detection chain provides both gamma images and visible images. Even though it is wrapped in a denal shield, the camera is still portable (mass < 15 kg) and compact (external diameter = 8 cm). The angular resolution is of the order of one degree for gamma rays of 1 MeV. In a few minutes, the device is able to measure a dose rate of 10 μGy/h delivered, for instance, by a 90 mCi source of 60Co located 10 m from the detector. The first images recorded in the laboratory will be presented and will illustrate the performance obtained with this camera.

  15. The Clementine longwave infrared camera

    SciTech Connect

    Priest, R.E.; Lewis, I.T.; Sewall, N.R.; Park, H.S.; Shannon, M.J.; Ledebuhr, A.G.; Pleasance, L.D.; Massie, M.A.; Metschuleit, K.

    1995-04-01

    The Clementine mission provided the first ever complete, systematic surface mapping of the moon from the ultra-violet to the near-infrared regions. More than 1.7 million images of the moon, earth and space were returned from this mission. The longwave-infrared (LWIR) camera supplemented the UV/Visible and near-infrared mapping cameras providing limited strip coverage of the moon, giving insight to the thermal properties of the soils. This camera provided ~100 m spatial resolution at 400 km periselene, and a 7 km across-track swath. This 2.1 kg camera using a 128 x 128 Mercury-Cadmium-Telluride (MCT) FPA viewed thermal emission of the lunar surface and lunar horizon in the 8.0 to 9.5 μm wavelength region. A description of this light-weight, low power LWIR camera along with a summary of lessons learned is presented. Design goals and preliminary on-orbit performance estimates are addressed in terms of meeting the mission's primary objective for flight qualifying the sensors for future Department of Defense flights.

  16. Medium format cameras used by NASA astronauts

    NASA Technical Reports Server (NTRS)

    Amsbury, David; Bremer, Jeff

    1989-01-01

    The medium format cameras and other hardware used for photographing the earth from the Space Shuttle are discussed. Illustrations and descriptions are given for the two types of cameras used for most earth photography, the NASA-modified Hasselblad 500 EL/M 70-mm cameras and the Linhof AeroTechnika 45 camera. Also, the data recording modules used on Space Shuttle missions and a mounting device to produce simultaneous photography using two cameras are examined.

  17. The GISMO-2 Bolometer Camera

    NASA Technical Reports Server (NTRS)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; Moseley, Samuel H.; Sharp, Elemer H.; Wollack, Edward J.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array, the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the full field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  18. Perceptual Color Characterization of Cameras

    PubMed Central

    Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo

    2014-01-01

    Color camera characterization, mapping outputs from the camera sensors to an independent color space, such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
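
    A minimal sketch of the least-squares baseline that such a characterization starts from, assuming paired camera RGB and reference XYZ measurements of a color chart are available; the patch values below are made up for illustration, and the perceptual spherical-sampling optimization the paper actually proposes is not reproduced here:

      import numpy as np

      # Hypothetical paired measurements of a color chart:
      # rows are patches, columns are channels.
      camera_rgb = np.array([[0.20, 0.10, 0.05],
                             [0.40, 0.35, 0.30],
                             [0.10, 0.30, 0.60],
                             [0.70, 0.65, 0.60]])
      reference_xyz = np.array([[0.18, 0.12, 0.06],
                                [0.38, 0.36, 0.31],
                                [0.15, 0.28, 0.58],
                                [0.68, 0.66, 0.61]])

      # Least-squares 3x3 matrix M such that camera_rgb @ M ~ reference_xyz.
      M, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
      predicted_xyz = camera_rgb @ M
      print(M)
      print(np.abs(predicted_xyz - reference_xyz).max())  # residual fit error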

  19. Dark Energy Camera for Blanco

    SciTech Connect

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  20. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of their own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  1. Stratoscope 2 integrating television camera

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The development, construction, test and delivery of an integrating television camera for use as the primary data sensor on Flight 9 of Stratoscope 2 is described. The system block diagrams are presented along with the performance data, and definition of the interface of the telescope with the power, telemetry, and communication system.

  2. Making Films without a Camera.

    ERIC Educational Resources Information Center

    Cox, Carole

    1980-01-01

    Describes draw-on filmmaking as an exciting way to introduce children to the plastic, fluid nature of the film medium, to develop their appreciation and understanding of divergent cinematic techniques and themes, and to invite them into the dream world of filmmaking without the need for a camera. (AEA)

  3. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1989-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  4. High speed multiwire photon camera

    NASA Technical Reports Server (NTRS)

    Lacy, Jeffrey L. (Inventor)

    1991-01-01

    An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.

  5. Measuring Distances Using Digital Cameras

    ERIC Educational Resources Information Center

    Kendal, Dave

    2007-01-01

    This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
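
    The article's full method is not reproduced here, but the pinhole-projection relation it builds on can be sketched: for an object of known real-world size whose plane is parallel to the image plane, distance is roughly focal length times real size divided by the size of the object's image on the sensor. The numbers below are illustrative assumptions, not values from the paper:

      def object_distance_m(focal_length_mm, real_size_m, image_size_px, pixel_pitch_mm):
          """Pinhole estimate: distance = f * real_size / size_on_sensor."""
          size_on_sensor_mm = image_size_px * pixel_pitch_mm
          return focal_length_mm * real_size_m / size_on_sensor_mm

      # Example (assumed values): 50 mm lens, 1.8 m tall object,
      # spanning 600 pixels on a sensor with 5 micrometre (0.005 mm) pixels.
      print(object_distance_m(50.0, 1.8, 600, 0.005))  # ~30 m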

  6. Camera assisted multimodal user interaction

    NASA Astrophysics Data System (ADS)

    Hannuksela, Jari; Silvén, Olli; Ronkainen, Sami; Alenius, Sakari; Vehviläinen, Markku

    2010-01-01

    Since more processing power and new sensing and display technologies are already available in mobile devices, there has been increased interest in building systems that communicate via different modalities such as speech, gesture, expression, and touch. In context-identification-based user interfaces, these independent modalities are combined to create new ways for users to interact with hand-helds. While these are unlikely to completely replace traditional interfaces, they will considerably enrich and improve the user experience and task performance. We demonstrate a set of novel user interface concepts that rely on the built-in sensors of modern mobile devices for recognizing the context and sequences of actions. In particular, we use the camera to detect whether the user is watching the device, for instance, to decide when to turn on the display backlight. In our approach the motion sensors are first employed to detect handling of the device. Then, based on ambient illumination information provided by a light sensor, the cameras are turned on. The frontal camera is used for face detection, while the back camera provides supplemental contextual information. The subsequent applications triggered by the context can be, for example, image capturing or bar code reading.

  7. Gamma-ray camera flyby

    SciTech Connect

    2010-01-01

    Animation based on an actual classroom demonstration of the prototype CCI-2 gamma-ray camera's ability to image a hidden radioactive source, a cesium-137 line source, in three dimensions. For more information see http://newscenter.lbl.gov/feature-stories/2010/06/02/applied-nuclear-physics/.

  8. The Camera Comes to Court.

    ERIC Educational Resources Information Center

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  9. High-speed pulse camera

    NASA Technical Reports Server (NTRS)

    Lawson, J. R.

    1968-01-01

    Miniaturized, 16 mm high speed pulse camera takes spectral photometric photographs upon instantaneous command. The design includes a low-friction, low-inertia film transport, a very thin beryllium shutter driven by a low-inertia stepper motor for minimum actuation time after a pulse command, and a binary encoder.

  10. Recent advances in digital camera optics

    NASA Astrophysics Data System (ADS)

    Ishiguro, Keizo

    2012-10-01

    The digital camera market has expanded enormously in the last ten years. The zoom lens is the key factor determining digital camera body size and image quality. Its technologies have been based on several analog technological advances, including methods of manufacturing aspherical lenses and mechanisms for image stabilization. Panasonic is one of the pioneers of both technologies. I will introduce previous trends in zoom lens optics as well as the original optical technologies of Panasonic's "LUMIX" digital cameras, in addition to the optics of 3D camera systems, and I will also consider future trends in digital cameras.

  11. Combustion pinhole-camera system

    DOEpatents

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external, variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  12. A 10-microm infrared camera.

    PubMed

    Arens, J F; Jernigan, J G; Peck, M C; Dobson, C A; Kilk, E; Lacy, J; Gaalema, S

    1987-09-15

    An IR camera has been built at the University of California at Berkeley for astronomical observations. The camera has been used primarily for high angular resolution imaging at mid-IR wavelengths. It has been tested at the University of Arizona 61- and 90-in. telescopes near Tucson and the NASA Infrared Telescope Facility on Mauna Kea, HI. In the observations the system has been used as an imager with interference coated and Fabry-Perot filters. These measurements have demonstrated a sensitivity consistent with photon shot noise, showing that the system is limited by the radiation from the telescope and atmosphere. Measurements of read noise, crosstalk, and hysteresis have been made in our laboratory. PMID:20490151

  13. Electronographic cameras for space astronomy.

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1972-01-01

    Magnetically-focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We also are developing electronographic image tubes of the conventional end-window-photo-cathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.

  14. ISO camera array development status

    NASA Technical Reports Server (NTRS)

    Sibille, F.; Cesarsky, C.; Agnese, P.; Rouan, D.

    1989-01-01

    A short outline is given of the Infrared Space Observatory Camera (ISOCAM), one of the 4 instruments onboard the Infrared Space Observatory (ISO), with the current status of its two 32x32 arrays, an InSb charge injection device (CID) and a Si:Ga direct read-out (DRO), and the results of the in orbit radiation simulation with gamma ray sources. A tentative technique for the evaluation of the flat fielding accuracy is also proposed.

  15. Graphic design of pinhole cameras

    NASA Technical Reports Server (NTRS)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
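
    The paper's graphic transfer-function technique is not reproduced here, but a commonly quoted analytic rule of thumb for the same trade-off, balancing geometric blur against diffraction blur, puts the optimal pinhole diameter near d = sqrt(2.44 * wavelength * focal length). A minimal sketch, assuming visible light and an illustrative focal length:

      import math

      def optimal_pinhole_diameter_mm(wavelength_nm, focal_length_mm):
          """Classical rule of thumb d ~ sqrt(2.44 * lambda * f) balancing
          diffraction blur against geometric blur."""
          wavelength_mm = wavelength_nm * 1e-6
          return math.sqrt(2.44 * wavelength_mm * focal_length_mm)

      # 550 nm light, 100 mm focal length -> roughly a 0.37 mm pinhole
      print(optimal_pinhole_diameter_mm(550, 100.0))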

  16. SPEIR: A Ge Compton Camera

    SciTech Connect

    Mihailescu, L; Vetter, K M; Burks, M T; Hull, E L; Craig, W W

    2004-02-11

    The SPEctroscopic Imager for γ-Rays (SPEIR) is a new concept of a compact γ-ray imaging system of high efficiency and spectroscopic resolution with a 4π field-of-view. The system behind this concept employs double-sided segmented planar Ge detectors accompanied by the use of list-mode photon reconstruction methods to create a sensitive, compact Compton scatter camera.

  17. 21 CFR 886.1120 - Opthalmic camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  18. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the...

  19. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 1 2012-01-01 2012-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  20. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 1 2014-01-01 2014-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  1. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 1 2013-01-01 2013-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  2. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the...

  3. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk...

  4. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
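
    A minimal sketch of one step described above, checking the residual vertical disparity of keypoints that have already been matched between the left and right frames; the matching itself and the roll/pitch/yaw/scale estimation are not shown, and the coordinates and tolerance below are assumptions:

      import numpy as np

      def max_vertical_disparity(left_pts, right_pts):
          """Largest vertical offset (pixels) between matched keypoints
          in the left and right frames."""
          dy = np.abs(left_pts[:, 1] - right_pts[:, 1])
          return dy.max()

      # Hypothetical matched keypoint coordinates (x, y) in pixels.
      left = np.array([[100.0, 200.0], [320.0, 410.5], [640.0, 120.2]])
      right = np.array([[ 92.0, 201.5], [311.0, 413.0], [633.0, 121.0]])

      if max_vertical_disparity(left, right) > 2.0:  # assumed tolerance in pixels
          print("vertical disparity exceeds tolerance; recalibration suggested")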

  5. Coaxial fundus camera for opthalmology

    NASA Astrophysics Data System (ADS)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which must provide low-light illumination of the human retina, high resolution at the retina, and reflection-free imaging. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD-plus-capture-lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  6. Passive Millimeter Wave Camera (PMMWC) at TRW

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Engineers at TRW, Redondo Beach, California, inspect the Passive Millimeter Wave Camera, a weather-piercing camera designed to see through fog, clouds, smoke and dust. Operating in the millimeter wave portion of the electromagnetic spectrum, the camera creates visual-like video images of objects, people, runways, obstacles and the horizon. A demonstration camera (shown in photo) has been completed and is scheduled for checkout tests and flight demonstration. Engineer (left) holds a compact, lightweight circuit board containing 40 complete radiometers, including antenna, monolithic millimeter wave integrated circuit (MMIC) receivers and signal processing and readout electronics that forms the basis for the camera's 1040-element focal plane array.

  7. Passive Millimeter Wave Camera (PMMWC) at TRW

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Engineers at TRW, Redondo Beach, California, inspect the Passive Millimeter Wave Camera, a weather-piercing camera designed to 'see' through fog, clouds, smoke and dust. Operating in the millimeter wave portion of the electromagnetic spectrum, the camera creates visual-like video images of objects, people, runways, obstacles and the horizon. A demonstration camera (shown in photo) has been completed and is scheduled for checkout tests and flight demonstration. Engineer (left) holds a compact, lightweight circuit board containing 40 complete radiometers, including antenna, monolithic millimeter wave integrated circuit (MMIC) receivers and signal processing and readout electronics that forms the basis for the camera's 1040-element focal plane array.

  8. HHEBBES! All sky camera system: status update

    NASA Astrophysics Data System (ADS)

    Bettonvil, F.

    2015-01-01

    A status update is given of the HHEBBES! all-sky camera system. HHEBBES!, an automatic camera for capturing bright meteor trails, is based on a DSLR camera and a liquid crystal chopper for measuring the angular velocity. The purposes of the system are a) to recover meteorites and b) to identify origin/parental bodies. In 2015, two new cameras were rolled out: BINGO! (similar to HHEBBES!, also in The Netherlands) and POgLED, in Serbia. BINGO! is the first camera equipped with a longer focal length fisheye lens, to further increase the accuracy. Several minor improvements have been made, and the data reduction pipeline was used for processing two prominent Dutch fireballs.

  9. CCD video camera and airborne applications

    NASA Astrophysics Data System (ADS)

    Sturz, Richard A.

    2000-11-01

    The human need to see for oneself, and to do so remotely, has given rise to video camera applications never before imagined, and they continue to grow. The instant understanding and verification offered by video lends its applications to every facet of life. Once an entertainment medium, video is now ever present in our daily life. The application to the aircraft platform is one aspect of the video camera's versatility; integrating the video camera into the aircraft platform is yet another story. The typical video camera, when applied to more standard scene imaging, faces less demanding parameters and considerations. This paper explores the video camera as applied to the more complicated airborne environment.

  10. Spectrometry with consumer-quality CMOS cameras.

    PubMed

    Scheeline, Alexander

    2015-01-01

    Many modern spectrometric instruments use diode arrays, charge-coupled arrays, or CMOS cameras for detection and measurement. As portable or point-of-use instruments are desirable, one would expect that instruments using the cameras in cellular telephones and tablet computers would be the basis of numerous instruments. However, no mass market for such devices has yet developed. The difficulties in using megapixel CMOS cameras for scientific measurements are discussed, and promising avenues for instrument development reviewed. Inexpensive alternatives to use of the built-in camera are also mentioned, as the long-term question is whether it is better to overcome the constraints of CMOS cameras or to bypass them.
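
    As an illustrative sketch of how a camera frame becomes a spectrum in instruments of this kind (an assumed typical layout, not a procedure taken from the article): if the dispersion axis runs along image columns, the rows covering the slit image can be summed and each column assigned a wavelength through a simple linear calibration. The frame and calibration coefficients below are made up:

      import numpy as np

      def extract_spectrum(frame, row_start, row_stop, wl0_nm, nm_per_px):
          """Sum the rows covering the slit image and assign a wavelength
          to each column with a linear calibration."""
          counts = frame[row_start:row_stop, :].sum(axis=0)
          wavelengths = wl0_nm + nm_per_px * np.arange(frame.shape[1])
          return wavelengths, counts

      frame = np.random.poisson(200, size=(480, 640)).astype(float)  # stand-in image
      wl, counts = extract_spectrum(frame, 220, 260, wl0_nm=400.0, nm_per_px=0.5)
      print(wl[0], wl[-1], counts.shape)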

  11. Mini gamma camera, camera system and method of use

    DOEpatents

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. There is also described a system wherein the output supplied by the high resolution, position sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.

  12. Initial laboratory evaluation of color video cameras

    SciTech Connect

    Terry, P L

    1991-01-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publishing of a report titled "Initial Laboratory Evaluation of Color Video Cameras" (SAND-91-2579). It gave a brief discussion of imager chips and color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two six additional cameras were tested by the traditional methods and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  13. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extracting some key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals, and based on that discuss the applicability of cameras in monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the possibility of cameras to monitor both above- and below-canopy phenology and snow.
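
    One widely used camera-derived color signal in phenology-camera work is the green chromatic coordinate, GCC = G / (R + G + B), averaged over a canopy region of interest. A minimal sketch under that assumption (the region and values below are made up, and this is not code from the poster):

      import numpy as np

      def green_chromatic_coordinate(rgb_roi):
          """GCC = mean G / (mean R + mean G + mean B) over a region of interest."""
          r, g, b = (rgb_roi[..., i].mean() for i in range(3))
          return g / (r + g + b)

      # Hypothetical 100 x 100 pixel canopy region from an 8-bit RGB image.
      roi = np.random.randint(0, 256, size=(100, 100, 3)).astype(float)
      print(green_chromatic_coordinate(roi))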

  14. Characterization of the Series 1000 Camera System

    SciTech Connect

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact network addressable scientific grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electron read noise at a 1MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and performance characterization is reported.
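
    As a back-of-the-envelope consistency check (an illustrative note, not part of the original report), a 70 dB dynamic range together with roughly 14 electrons of read noise implies a full-well capacity of about read noise times 10^(70/20), i.e. on the order of 44,000 electrons:

      def full_well_from_dynamic_range(read_noise_e, dynamic_range_db):
          """Full well ~ read noise * 10^(DR/20), for a dynamic range quoted in dB."""
          return read_noise_e * 10 ** (dynamic_range_db / 20.0)

      print(full_well_from_dynamic_range(14, 70))  # ~44,000 electrons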

  15. PDX infrared TV camera system

    SciTech Connect

    Jacobsen, R.A.

    1981-08-01

    An infrared TV camera system has been developed for use on PDX. This system is capable of measuring the temporal and spatial energy deposition on the limiters and divertor neutralizer plates; time resolutions of 1 ms are achievable. The system has been used to measure the energy deposition on the PDX neutralizer plates and the temperature jump of limiter surfaces during a pulse. The energy scrapeoff layer is found to have characteristic dimensions of the order of a cm. The measurement of profiles is very sensitive to variations in the thermal emissivity of the surfaces.

  16. Cryogenic mechanism for ISO camera

    NASA Astrophysics Data System (ADS)

    Luciano, G.

    1987-12-01

    The Infrared Space Observatory (ISO) camera configuration, architecture, materials, tribology, motorization, and development status are outlined. The operating temperature is 2 to 3 K, for wavelengths of 2.5 to 18 microns. The selected material is a titanium alloy, with MoS2/TiC lubrication. A stepping motor drives the ball-bearing-mounted wheels to which the optical elements are fixed. Model test results are satisfactory and also confirm the validity of the test facilities, particularly for vibration tests at 4 K.

  17. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
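
    The 136-year figure is consistent with a time code that counts seconds in a 32-bit field; this is an inference offered for illustration, not a statement from the article. The arithmetic:

      SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year

      print(2 ** 32 / SECONDS_PER_YEAR)  # ~136.1 years before a 32-bit seconds counter wraps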

  18. Optimising camera traps for monitoring small mammals.

    PubMed

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.
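
    To see why trigger speed matters for small, fast animals (an illustrative calculation, not taken from the study), one can compare the distance an animal covers during the trigger delay with the width of the camera's field of view at the detection distance. The speeds, angles and distances below are assumptions:

      import math

      def stays_in_view(speed_m_s, trigger_delay_s, fov_deg, distance_m):
          """True if the distance moved during the trigger delay is less than
          half the field-of-view width at the given distance."""
          fov_width = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
          return speed_m_s * trigger_delay_s < fov_width / 2.0

      # A stoat running at ~3 m/s, 2 m from a camera with a 40 degree lens:
      print(stays_in_view(3.0, 0.2, 40, 2.0))  # fast trigger: likely still in frame
      print(stays_in_view(3.0, 2.1, 40, 2.0))  # slow trigger: probably missed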

  19. Volcano surveillance using infrared cameras

    NASA Astrophysics Data System (ADS)

    Spampinato, Letizia; Calvari, Sonia; Oppenheimer, Clive; Boschi, Enzo

    2011-05-01

    Volcanic eruptions are commonly preceded, accompanied, and followed by variations of a number of detectable geophysical and geochemical manifestations. Many remote sensing techniques have been applied to tracking anomalies and eruptive precursors, and monitoring ongoing volcanic eruptions, offering obvious advantages over in situ techniques especially during hazardous activity. Whilst spaceborne instruments provide a distinct advantage for collecting data remotely in this regard, they still cannot match the spatial detail or time resolution achievable using portable imagers on the ground or aircraft. Hand-held infrared camera technology has advanced significantly over the last decade, resulting in a proliferation of commercially available instruments, such that volcano observatories are increasingly implementing them in monitoring efforts. Improved thermal surveillance of active volcanoes has not only enhanced hazard assessment but it has contributed substantially to understanding a variety of volcanic processes. Drawing on over a decade of operational volcano surveillance in Italy, we provide here a critical review of the application of infrared thermal cameras to volcano monitoring. Following a summary of key physical principles, instrument capabilities, and the practicalities and methods of data collection, we discuss the types of information that can be retrieved from thermal imagery and what they have contributed to hazard assessment and risk management, and to physical volcanology. With continued developments in thermal imager technology and lower instrument costs, there will be increasing opportunity to gather valuable observations of volcanoes. It is thus timely to review the state of the art and we hope thereby to stimulate further research and innovation in this area.

  20. Toward the camera rain gauge

    NASA Astrophysics Data System (ADS)

    Allamano, P.; Croci, A.; Laio, F.

    2015-03-01

    We propose a novel technique based on the quantitative detection of rain intensity from images, i.e., from pictures taken in rainy conditions. The method is fully analytical and based on the fundamentals of camera optics. A rigorous statistical framing of the technique allows one to obtain the rain rate estimates in terms of expected values and associated uncertainty. We show that the method can be profitably applied to real rain events, and we obtain promising results with errors of the order of ±25%. A precise quantification of the method's accuracy will require a more systematic and long-term comparison with benchmark measures. The significant step forward with respect to standard rain gauges resides in the possibility to retrieve measures at very high temporal resolution (e.g., 30 measures per minute) at a very low cost. Perspective applications include the possibility to dramatically increase the spatial density of rain observations by exporting the technique to crowdsourced pictures of rain acquired with cameras and smartphones.

  1. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  2. Laboratory calibration and characterization of video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1990-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of nonperpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.

  3. Laboratory Calibration and Characterization of Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1989-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of non-perpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.

  4. Television camera video level control system

    NASA Technical Reports Server (NTRS)

    Kravitz, M.; Freedman, L. A.; Fredd, E. H.; Denef, D. E. (Inventor)

    1985-01-01

    A video level control system is provided which generates a normalized video signal for a camera processing circuit. The video level control system includes a lens iris which provides a controlled light signal to a camera tube. The camera tube converts the light signal provided by the lens iris into electrical signals. A feedback circuit, in response to the electrical signals generated by the camera tube, provides feedback signals to the lens iris and the camera tube. This assures that a normalized video signal is provided in a first illumination range. An automatic gain control loop, which is also responsive to the electrical signals generated by the camera tube, operates in tandem with the feedback circuit. This assures that the normalized video signal is maintained in a second illumination range.
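
    A minimal sketch of the kind of closed-loop level control the patent describes in words (not the patented circuit itself, and the numbers are assumptions): a gain term is nudged each frame so the measured video level converges toward a normalized target.

      def agc_step(gain, measured_level, target_level, rate=0.3):
          """Nudge the gain so the measured video level approaches the target."""
          error = target_level - measured_level
          return gain * (1.0 + rate * error / target_level)

      gain, scene_level = 1.0, 0.3   # scene produces 0.3 of the target at unity gain
      for _ in range(30):
          measured = scene_level * gain
          gain = agc_step(gain, measured, target_level=1.0)
      print(round(scene_level * gain, 3))  # converges near the normalized target 1.0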

  5. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  6. Multiplex imaging with multiple-pinhole cameras

    NASA Technical Reports Server (NTRS)

    Brown, C.

    1974-01-01

    When making photographs in X rays or gamma rays with a multiple-pinhole camera, the individual images of an extended object such as the sun may be allowed to overlap. Then the situation is in many ways analogous to that in a multiplexing device such as a Fourier spectroscope. Some advantages and problems arising with such use of the camera are discussed, and expressions are derived to describe the relative efficacy of three exposure/postprocessing schemes using multiple-pinhole cameras.

  7. Electrostatic camera system functional design study

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Cook, F. J.; Moore, R. F.

    1972-01-01

    A functional design study for an electrostatic camera system for application to planetary missions is presented. The electrostatic camera can produce and store a large number of pictures and provide for transmission of the stored information at arbitrary times after exposure. Preliminary configuration drawings and circuit diagrams for the system are illustrated. The camera system's size, weight, power consumption, and performance are characterized. Tradeoffs between system weight, power, and storage capacity are identified.

  8. The CTIO CCD-TV acquisition camera

    NASA Astrophysics Data System (ADS)

    Walker, Alistair R.; Schmidt, Ricardo

    A prototype CCD-TV camera has been built at CTIO, conceptually similar to the cameras in use at Lick Observatory. A GEC CCD is used as the detector, cooled thermoelectrically to -45 °C. Pictures are displayed via an IBM PC clone computer and an ITI image display board. Results of tests at the CTIO telescopes are discussed, including comparisons with the RCA ISIT cameras used at present for acquisition and guiding.

  9. Recent advances in MPEG-7 cameras

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ebrahimi, Touradj

    2006-08-01

    We propose a smart camera which performs video analysis and generates an MPEG-7 compliant stream. By producing a content-based metadata description of the scene, the MPEG-7 camera extends the capabilities of conventional cameras. The metadata is then directly interpretable by a machine. This is especially helpful in a number of applications such as video surveillance, augmented reality and quality control. As a use case, we describe an algorithm to identify moving objects and produce the corresponding MPEG-7 description. The algorithm runs in real-time on a Matrox Iris P300C camera.
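
    The moving-object use case can be sketched with off-the-shelf background subtraction followed by a toy per-frame description; the XML element names below are invented placeholders rather than actual MPEG-7 descriptors, and the processing is not the algorithm that runs on the Matrox camera.

        import cv2

        def describe_moving_objects(frames, min_area=200):
            """Detect moving regions and emit a simple XML-like description per frame."""
            subtractor = cv2.createBackgroundSubtractorMOG2()
            descriptions = []
            for i, frame in enumerate(frames):
                mask = subtractor.apply(frame)
                # drop shadow pixels (value 127) and keep foreground (255)
                mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
                # OpenCV 4.x returns (contours, hierarchy)
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                boxes = [cv2.boundingRect(c) for c in contours
                         if cv2.contourArea(c) >= min_area]
                regions = "".join(
                    f'<Region x="{x}" y="{y}" w="{w}" h="{h}"/>' for x, y, w, h in boxes)
                descriptions.append(f'<Frame id="{i}">{regions}</Frame>')
            return descriptions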

  10. True-color night vision cameras

    NASA Astrophysics Data System (ADS)

    Kriesel, Jason; Gat, Nahum

    2007-04-01

    This paper describes True-Color Night Vision cameras that are sensitive to the visible to near-infrared (V-NIR) portion of the spectrum allowing for the "true-color" of scenes and objects to be displayed and recorded under low-light-level conditions. As compared to traditional monochrome (gray or green) night vision imagery, color imagery has increased information content and has proven to enable better situational awareness, faster response time, and more accurate target identification. Urban combat environments, where rapid situational awareness is vital, and marine operations, where there is inherent information in the color of markings and lights, are example applications that can benefit from True-Color Night Vision technology. Two different prototype cameras, employing two different true-color night vision technological approaches, are described and compared in this paper. One camera uses a fast-switching liquid crystal filter in front of a custom Gen-III image intensified camera, and the second camera is based around an EMCCD sensor with a mosaic filter applied directly to the sensor. In addition to visible light, both cameras utilize NIR to (1) increase the signal and (2) enable the viewing of laser aiming devices. The performance of the true-color cameras, along with the performance of standard (monochrome) night vision cameras, is reported and compared under various operating conditions in the lab and the field. In addition to subjective criteria, figures of merit designed specifically for the objective assessment of such cameras are used in this analysis.

  11. Omnidirectional underwater camera design and calibration.

    PubMed

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
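
    The individual-ray modelling that the FOV simulator relies on reduces, for a single flat port, to the vector form of Snell's law; the sketch below refracts one ray between two media under assumed refractive indices and ignores the actual housing geometry treated in the paper.

        import numpy as np

        def refract(direction, normal, n1, n2):
            """Refract a ray at a flat interface (vector form of Snell's law).

            direction : ray direction in medium 1 (need not be unit length).
            normal    : interface normal pointing back toward medium 1.
            Returns the refracted unit vector, or None for total internal reflection.
            """
            d = direction / np.linalg.norm(direction)
            n = normal / np.linalg.norm(normal)
            eta = n1 / n2
            cos_i = -np.dot(n, d)
            sin2_t = eta**2 * (1.0 - cos_i**2)
            if sin2_t > 1.0:
                return None                      # total internal reflection
            cos_t = np.sqrt(1.0 - sin2_t)
            return eta * d + (eta * cos_i - cos_t) * n

        # Example: a ray in air (n = 1.0) entering water (n = 1.33) through a flat port.
        ray_in_water = refract(np.array([0.3, 0.0, 1.0]),
                               np.array([0.0, 0.0, -1.0]), 1.0, 1.33)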

  12. Omnidirectional underwater camera design and calibration.

    PubMed

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-03-12

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.

  13. High-speed cameras at Los Alamos

    NASA Astrophysics Data System (ADS)

    Brixner, Berlyn

    1997-05-01

    In 1943, there was no camera with the microsecond resolution needed for research in Atomic Bomb development. We had the Mitchell camera (100 fps), the Fastax (10 000), the Marley (100 000), the drum streak (moving slit image) 10^-5 s resolution, and electro-optical shutters for 10^-6 s. Julian Mack invented a rotating-mirror camera for 10^-7 s, which was in use by 1944. Small rotating mirror changes secured a resolution of 10^-8 s. Photography of oscilloscope traces soon recorded 10^-6 s resolution, which was later improved to 10^-8 s. Mack also invented two time resolving spectrographs for studying the radiation of the first atomic explosion. Much later, he made a large aperture spectrograph for shock wave spectra. An image dissecting drum camera running at 10^7 frames per second (fps) was used for studying high velocity jets. Brixner invented a simple streak camera which gave 10^-8 s resolution. Using a moving film camera, an interferometer pressure gauge was developed for measuring shock-front pressures up to 100 000 psi. An existing Bowen 76-lens frame camera was speeded up by our turbine driven mirror to make 1 500 000 fps. Several streak cameras were made with writing arms from 4 1/2 to 40 in. and apertures from f/2.5 to f/20. We made framing cameras with top speeds of 50 000, 1 000 000, 3 500 000, and 14 000 000 fps.

  14. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707

  15. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  16. Depth estimation using a lightfield camera

    NASA Astrophysics Data System (ADS)

    Roper, Carissa

    The latest innovation to camera design has come in the form of the lightfield, or plenoptic, camera that captures 4-D radiance data rather than just the 2-D scene image via microlens arrays. With the spatial and angular light ray data now recorded on the camera sensor, it is feasible to construct algorithms that can estimate depth of field in different portions of a given scene. There are limitations to the precision due to hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on the commercially available plenoptic camera.

  17. Wide dynamic range video camera

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    A television camera apparatus is disclosed in which bright objects are attenuated to fit within the dynamic range of the system, while dim objects are not. The apparatus receives linearly polarized light from an object scene, the light being passed by a beam splitter and focused on the output plane of a liquid crystal light valve. The light valve is oriented such that, with no excitation from the cathode ray tube, all light is rotated 90 deg and focused on the input plane of the video sensor. The light is then converted to an electrical signal, which is amplified and used to excite the CRT. The resulting image is collected and focused by a lens onto the light valve which rotates the polarization vector of the light to an extent proportional to the light intensity from the CRT. The overall effect is to selectively attenuate the image pattern focused on the sensor.

  18. Gesture recognition on smart cameras

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Chevobbe, Stephane; Darouich, Mehdi

    2013-02-01

    Gesture recognition is a feature of human-machine interaction that allows more natural interaction without the use of complex devices. For this reason, several methods of gesture recognition have been developed in recent years. However, most real-time methods are designed to operate on a personal computer with high computing resources and memory. In this paper, we analyze relevant methods found in the literature in order to investigate the ability of smart cameras to execute gesture recognition algorithms. We elaborate two hand gesture recognition pipelines. The first method is based on invariant moments extraction and the second on fingertip detection. The hand detection method used for both pipelines is based on skin color segmentation. The results obtained show that the un-optimized versions of the invariant moments method and the fingertip detection method can reach 10 fps on an embedded processor and use about 200 kB of memory.
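
    The skin-colour segmentation stage shared by both pipelines can be sketched as a simple HSV threshold followed by morphological clean-up; the threshold values below are rough illustrative bounds, not those used by the authors.

        import cv2
        import numpy as np

        def skin_mask(bgr_frame):
            """Return a binary mask of probable skin pixels using HSV thresholds."""
            hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
            lower = np.array([0, 40, 60], dtype=np.uint8)     # illustrative bounds
            upper = np.array([25, 255, 255], dtype=np.uint8)
            mask = cv2.inRange(hsv, lower, upper)
            kernel = np.ones((5, 5), np.uint8)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
            return mask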

  19. Illumination box and camera system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  20. Explosive Transient Camera (ETC) Program

    NASA Technical Reports Server (NTRS)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  1. LROC - Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A.; Malin, M. C.; Ravine, M. A.; Thomas, P. C.; Turtle, E. P.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO) went into lunar orbit on 23 June 2009. The LRO Camera (LROC) acquired its first lunar images on June 30 and commenced full scale testing and commissioning on July 10. The LROC consists of two narrow-angle cameras (NACs) that provide 0.5 m scale panchromatic images over a combined 5 km swath, and a wide-angle camera (WAC) to provide images at a scale of 100 m per pixel in five visible wavelength bands (415, 566, 604, 643, and 689 nm) and 400 m per pixel in two ultraviolet bands (321 nm and 360 nm) from the nominal 50 km orbit. Early operations were designed to test the performance of the cameras under all nominal operating conditions and provided a baseline for future calibrations. Test sequences included off-nadir slews to image stars and the Earth, 90° yaw sequences to collect flat field calibration data, night imaging for background characterization, and systematic mapping to test performance. LRO initially was placed into a terminator orbit resulting in images acquired under low signal conditions. Over the next three months the incidence angle at the spacecraft’s equator crossing gradually decreased towards high noon, providing a range of illumination conditions. Several hundred south polar images were collected in support of impact site selection for the LCROSS mission; details can be seen in many of the shadows. Commissioning phase images not only proved the instruments’ overall performance was nominal, but also that many geologic features of the lunar surface are well preserved at the meter-scale. Of particular note is the variety of impact-induced morphologies preserved in a near pristine state in and around kilometer-scale and larger young Copernican age impact craters that include: abundant evidence of impact melt of a variety of rheological properties, including coherent flows with surface textures and planimetric properties reflecting supersolidus (e.g., liquid melt) emplacement, blocks delicately perched on

  2. Camera processing with chromatic aberration.

    PubMed

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on an image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected. PMID:25163060
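
    The magnification-error component of lateral chromatic aberration can be sketched by rescaling the red and blue channels about the image centre; the scale factors below are illustrative assumptions, and the paper's joint demosaicking and correction scheme is considerably more involved.

        import cv2
        import numpy as np

        def correct_lateral_ca(bgr, scale_r=1.002, scale_b=0.998):
            """Rescale the R and B channels about the image centre to reduce colour fringing."""
            h, w = bgr.shape[:2]
            centre = (w / 2.0, h / 2.0)
            out = bgr.copy()
            for channel, scale in ((2, scale_r), (0, scale_b)):   # OpenCV channel order is B, G, R
                M = cv2.getRotationMatrix2D(centre, 0.0, scale)   # pure scaling about the centre
                out[:, :, channel] = cv2.warpAffine(bgr[:, :, channel], M, (w, h),
                                                    flags=cv2.INTER_LINEAR)
            return out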

  3. HRSC: High resolution stereo camera

    USGS Publications Warehouse

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and highlevel data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  4. Fundamental study on identification of CMOS cameras

    NASA Astrophysics Data System (ADS)

    Kurosawa, Kenji; Saitoh, Naoki

    2003-08-01

    In this study, we discussed individual camera identification of CMOS cameras, because CMOS (complementary metal-oxide-semiconductor) imaging detectors have begun to make their move into CCD (charge-coupled device) territory in recent years. Whether or not given images were taken with a given CMOS camera can be determined by detecting the imager's intrinsic, unique fixed pattern noise (FPN), just like the individual CCD camera identification method proposed by the authors. Both dark and bright pictures taken with CMOS cameras can be identified by the method, because not only dark current in the photodetectors but also the MOS-FET amplifiers incorporated in each pixel may produce pixel-to-pixel nonuniformity in sensitivity. Each pixel in a CMOS detector has an amplifier, which degrades the quality of bright images due to the nonuniformity of the amplifier gain. Two CMOS cameras were evaluated in our experiments: the WebCamGoPlus (Creative) and the EOS D30 (Canon). The WebCamGoPlus is a low-priced web camera, whereas the EOS D30 is for professional use. Images of a white plate were recorded with the cameras at plate luminances of 0 cd/m2 and 150 cd/m2. The recorded images were integrated over multiple frames to reduce the random noise component. In the images from both cameras, characteristic dot patterns were observed: some bright dots in the dark images and some dark dots in the bright images. The results show that the camera identification method is also effective for CMOS cameras.
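
    The fixed-pattern-noise matching idea can be sketched as follows: a reference FPN map is built by averaging many frames from the candidate camera, the noise residual of a questioned image is extracted with a smoothing filter, and the two are compared by normalized correlation. The filter width and the correlation measure are illustrative assumptions, not the authors' exact procedure.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def fpn_reference(frames):
            """Estimate the camera's fixed pattern noise by averaging many frames."""
            mean = np.mean(np.asarray(frames, dtype=float), axis=0)
            return mean - gaussian_filter(mean, sigma=3)   # keep only the pixel-level pattern

        def fpn_correlation(image, reference_fpn):
            """Normalized correlation between an image's noise residual and the reference FPN."""
            img = image.astype(float)
            residual = img - gaussian_filter(img, sigma=3)
            a = residual - residual.mean()
            b = reference_fpn - reference_fpn.mean()
            return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))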

  5. Camera self-calibration from translation by referring to a known camera.

    PubMed

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image, and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation, as required in the method, is much more maneuverable, compared with some strict motions in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which has important applications, especially for multicamera and zooming camera systems.
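
    The final decomposition step can be sketched with an RQ factorization: if the infinite homography from the reference image to the uncalibrated image satisfies H∞ = K_u R K_ref^-1, then the product H∞ K_ref factors into an upper-triangular K_u and a rotation R. The sketch below assumes H∞ has already been estimated as described in the record.

        import numpy as np
        from scipy.linalg import rq

        def intrinsics_from_infinite_homography(H_inf, K_ref):
            """Recover K_u and R from H_inf = K_u @ R @ inv(K_ref) (up to scale)."""
            M = H_inf @ K_ref                 # equals K_u @ R up to scale
            K_u, R = rq(M)
            # Fix signs so K_u has a positive diagonal; keep the product unchanged.
            signs = np.sign(np.diag(K_u))
            K_u = K_u * signs                 # scale the columns of K_u
            R = (R.T * signs).T               # scale the corresponding rows of R
            K_u = K_u / K_u[2, 2]             # conventional normalization
            if np.linalg.det(R) < 0:
                R = -R                        # make R a proper rotation
            return K_u, R

    The RQ factorization is the natural choice here because an intrinsic matrix is upper triangular by construction, so it separates directly from the rotation.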

  6. Christoph Scheiner and the camera obscura (German Title: Christoph Scheiner und die Camera obscura )

    NASA Astrophysics Data System (ADS)

    Daxecker, Franz

    A hitherto unnoted portable camera obscura developed by Christoph Scheiner is documented with drawings. Furthermore, a walk-in camera obscura and the proof of the intersection of light rays caused by a pinhole are described, as well as the comparison between the camera obscura and the eye.

  7. Making a room-sized camera obscura

    NASA Astrophysics Data System (ADS)

    Flynt, Halima; Ruiz, Michael J.

    2015-01-01

    We describe how to convert a room into a camera obscura as a project for introductory geometrical optics. The view for our camera obscura is a busy street scene set against a beautiful mountain skyline. We include a short video with project instructions, ray diagrams and delightful moving images of cars driving on the road outside.
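
    The size of the projected scene follows from similar triangles through the pinhole; the one-line calculation below uses a hypothetical 10 m tall building 50 m away and a 4 m deep room, purely as an illustration.

        def image_height(object_height_m, object_distance_m, room_depth_m):
            """Similar triangles for a pinhole: image size scales as room depth / object distance."""
            return object_height_m * room_depth_m / object_distance_m

        # A 10 m tall building 50 m away, projected onto the back wall of a 4 m deep room:
        print(image_height(10.0, 50.0, 4.0))   # 0.8 m tall (and inverted)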

  8. Solid State Replacement of Rotating Mirror Cameras

    SciTech Connect

    Frank, A M; Bartolick, J M

    2006-08-25

    Rotating mirror cameras have been the mainstay of mega-frame per second imaging for decades. There is still no electronic camera that can match a film based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 micro-seconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all solid state architecture, dubbed ''In-situ Storage Image Sensor'' or ''ISIS'', by Prof. Goji Etoh, has made its first appearance on the market and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  9. Thermal Cameras in School Laboratory Activities

    ERIC Educational Resources Information Center

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal cameras offer real-time visual access to otherwise invisible thermal phenomena, which are conceptually demanding for learners during traditional teaching. We present three studies of students' conduction of laboratory activities that employ thermal cameras to teach challenging thermal concepts in grades 4, 7 and 10-12. Visualization of…

  10. Cameras Monitor Spacecraft Integrity to Prevent Failures

    NASA Technical Reports Server (NTRS)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained under working with NASA to develop an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  11. Creating and Using a Camera Obscura

    ERIC Educational Resources Information Center

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material. Originally images were…

  12. Single chip camera active pixel sensor

    NASA Technical Reports Server (NTRS)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  13. An electronic multiband camera film viewer.

    NASA Technical Reports Server (NTRS)

    Roberts, L. H.

    1972-01-01

    An electronic viewer for real-time viewing and processing of multiband camera imagery is described. The Multiband Camera Film Viewer (MCFV) is a high-resolution, 1000-line system scanning three channels of multiband imagery. The MCFV provides a calibrated output from each of the three channels for viewing in composite true color, analog false-color, and digitized, enhanced false color.

  14. A BASIC CAMERA UNIT FOR MEDICAL PHOTOGRAPHY.

    PubMed

    SMIALOWSKI, A; CURRIE, D J

    1964-08-22

    A camera unit suitable for most medical photographic purposes is described. The unit comprises a single-lens reflex camera, an electronic flash unit and supplementary lenses. Simple instructions for use of this basic unit are presented. The unit is entirely suitable for taking fine-quality photographs of most medical subjects by persons who have had little photographic training.

  15. Depth Estimation Using a Sliding Camera.

    PubMed

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is very expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, its computational cost is heavy and its accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works satisfactorily with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which takes advantage of the continuously changing camera pose and also greatly reduces computation time. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data have illustrated the effectiveness of the proposed algorithm. PMID:26685238
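
    The underlying stereo relation exploited by a sliding (translating) camera is the inverse proportionality between disparity and depth; the sketch below is the textbook rectified-stereo formula with an assumed focal length and baseline, not the paper's adaptive frame-selection algorithm.

        import numpy as np

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Rectified stereo: Z = f * B / d (disparity and focal length in pixels, baseline in metres)."""
            d = np.asarray(disparity_px, dtype=float)
            with np.errstate(divide="ignore"):
                return np.where(d > 0, focal_px * baseline_m / d, np.inf)

        # Example: 1200-pixel focal length, 5 cm slide between frames, 8-pixel disparity -> 7.5 m.
        print(depth_from_disparity(8.0, 1200.0, 0.05))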

  16. Depth Estimation Using a Sliding Camera.

    PubMed

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is very expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, its computational cost is heavy and its accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works satisfactorily with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which takes advantage of the continuously changing camera pose and also greatly reduces computation time. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data have illustrated the effectiveness of the proposed algorithm.

  17. Digital Cameras in the K-12 Classroom.

    ERIC Educational Resources Information Center

    Clark, Kenneth; Hosticka, Alice; Bedell, Jacqueline

    This paper discusses the use of digital cameras in K-12 education. Examples are provided of the integration of the digital camera and visual images into: reading and writing; science, social studies, and mathematics; projects; scientific experiments; desktop publishing; visual arts; data analysis; computer literacy; classroom atmosphere; and…

  18. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  19. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  20. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  1. AIM: Ames Imaging Module Spacecraft Camera

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah

    2015-01-01

    The AIM camera is a small, lightweight, low power, low cost imaging system developed at NASA Ames. Though it has imaging capabilities similar to those of $1M plus spacecraft cameras, it does so on a fraction of the mass, power and cost budget.

  2. 21 CFR 886.1120 - Opthalmic camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED)...

  3. 21 CFR 886.1120 - Ophthalmic camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Ophthalmic camera. 886.1120 Section 886.1120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED)...

  4. 21 CFR 886.1120 - Opthalmic camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED)...

  5. 21 CFR 886.1120 - Ophthalmic camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Ophthalmic camera. 886.1120 Section 886.1120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED)...

  6. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

    Flares are used as valuable electronic warfare assets for the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point, which is seen by multiple cameras, is a common problem. Camera placement, camera calibration, corresponding pixel determination between the images of different cameras and also the triangulation algorithm affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on the flare trajectory estimation performance by simulations. Firstly, the 3D trajectory of a flare and also of the aircraft, which dispenses the flare, are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and also the 3D position of the flare, the image plane coordinates of the flare on both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, two sources of error are used. The first models the uncertainties in the determination of the camera view vectors, i.e., the camera orientations are measured with noise. The second models imperfections in determining the corresponding flare pixels between the two cameras. Finally, the 3D position of the flare is estimated from the corresponding pixel indices, the view vectors and also the FOV of the cameras by triangulation. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.
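
    The triangulation step, recovering a 3D flare position from its pixel coordinates in two cameras, can be sketched with the standard linear (DLT) construction given the two projection matrices; this is the generic textbook method and stands in for whichever triangulation routine the simulation actually uses.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation from two 3x4 projection matrices and
            pixel coordinates x1 = (u1, v1), x2 = (u2, v2). Returns a 3-vector."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]                    # null vector of A, homogeneous 3D point
            return X[:3] / X[3]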

  7. Flow visualization by mobile phone cameras

    NASA Astrophysics Data System (ADS)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smart phones have completely changed people's communication within the last ten years. However, these devices offer not only communication through different channels but also applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and in fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
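
    The core of such a simplistic PIV system is locating the cross-correlation peak between two interrogation windows taken from consecutive frames; the FFT-based sketch below gives integer-pixel accuracy only and omits the sub-pixel peak fitting and window overlap that a practical system would add.

        import numpy as np

        def window_displacement(win_a, win_b):
            """Integer-pixel displacement of the particle pattern from win_a to win_b,
            found as the peak of the FFT-based cross-correlation."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
            corr = np.fft.fftshift(corr)                 # zero displacement at the centre
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            centre = np.array(corr.shape) // 2
            dy, dx = np.array(peak) - centre
            return dx, dy                                # pixels (x: columns, y: rows)

    Tiling the image with overlapping windows and adding three-point Gaussian peak fitting would turn this single-window estimate into a full sub-pixel velocity field.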

  8. The Hyperspectral Stereo Camera Project

    NASA Astrophysics Data System (ADS)

    Griffiths, A. D.; Coates, A. J.

    2006-12-01

    The MSSL Hyperspectral Stereo Camera (HSC) is developed from Beagle2 stereo camera heritage. Replacing filter wheels with liquid crystal tuneable filters (LCTF) turns each eye into a compact hyperspectral imager. Hyperspectral imaging is defined here as acquiring 10s-100s of images in 10-20 nm spectral bands. Combined, these bands form an image `cube' (with wavelength as the third dimension) allowing a detailed spectrum to be extracted at any pixel position. An LCTF is conceptually similar to the Fabry-Perot tuneable filter design but instead of physical separation, the variable refractive index of the liquid crystal etalons is used to define the wavelength of interest. For 10 nm bandwidths, LCTFs are available covering the 400-720 nm and 650-1100 nm ranges. The resulting benefits include reduced imager mechanical complexity, no limitation on the number of filter wavelengths available and the ability to change the wavelengths of interest in response to new findings as the mission proceeds. LCTFs are currently commercially available from two US companies - Scientific Solutions Inc. and Cambridge Research Inc. (CRI). CRI distribute the `Varispec' LCTFs used in the HSC. Currently, in Earth orbit hyperspectral imagers can prospect for minerals, detect camouflaged military equipment and determine the species and state of health of crops. Therefore, we believe this instrument shows great promise for a wide range of investigations in the planetary science domain (below). MSSL will integrate and test at representative Martian temperatures the HSC development model (to determine power requirements to prevent the liquid crystals freezing). Additionally, a full radiometric calibration is required to determine the HSC sensitivity. The second phase of the project is to demonstrate (in a ground based lab) the benefit of much higher spectral resolution to the following Martian scientific investigations: - Determination of the mineralogy of rocks and soil - Detection of

  9. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the need to develop a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of development of high-speed video cameras by others are also briefly reviewed.

  10. NIR Camera/spectrograph: TEQUILA

    NASA Astrophysics Data System (ADS)

    Ruiz, E.; Sohn, E.; Cruz-Gonzalez, I.; Salas, L.; Parraga, A.; Torres, R.; Perez, M.; Cobos, F.; Tejada, C.; Iriarte, A.

    1998-11-01

    We describe the configuration and operation modes of the IR camera/spectrograph called TEQUILA, based on a 1024 × 1024 HgCdTe FPA (HAWAII). The optical system will allow three possible modes of operation: direct imaging, low- and medium-resolution spectroscopy, and polarimetry. The basic system is being designed to consist of the following: 1) an LN2 dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder; 2) control and readout electronics based on DSP modules linked to a workstation through fiber optics; 3) an optomechanical assembly cooled to -30 °C that provides efficient operation of the instrument in its various modes; and 4) a control module for the moving parts of the instrument. The opto-mechanical assembly will have the necessary provisions to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. The final image acquisition and control of the whole instrument are carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope of the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican Infrared-Optical Telescope (TIM).

  11. Smart Camera Technology Increases Quality

    NASA Technical Reports Server (NTRS)

    2004-01-01

    When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates of more than a few per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full and then subsequent information is lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.

  12. Cloud Computing with Context Cameras

    NASA Astrophysics Data System (ADS)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ˜2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ˜0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.

  13. True three-dimensional camera

    NASA Astrophysics Data System (ADS)

    Kornreich, Philipp; Farell, Bart

    2013-01-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. This is accomplished by short photo-conducting lightguides at each pixel. In the eye the rods and cones are the fiber-like lightguides. The device uses ambient light that is only coherent in spherical shell-shaped light packets of thickness of one coherence length. Modern semiconductor technology permits the construction of lightguides shorter than a coherence length of ambient light. Each of the frequency components of the broad band light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel. Light frequency components in the packet arriving at a pixel through a convex lens add constructively only if the light comes from the object point in focus at this pixel. The light in packets from all other object points cancels. Thus the pixel receives light from one object point only. The lightguide has contacts along its length. The lightguide charge carriers are generated by the light patterns. These light patterns, and thus the photocurrent, shift in response to the phase of the input signal. Thus, the photocurrent is a function of the distance from the pixel to its object point. Applications include autonomous vehicle navigation and robotic vision. Another application is a crude teleportation system consisting of a camera and a three-dimensional printer at a remote location.

  14. Autoconfiguration of a dynamic nonoverlapping camera network.

    PubMed

    Junejo, Imran N; Cao, Xiaochun; Foroosh, Hassan

    2007-08-01

    In order to monitor sufficiently large areas of interest for surveillance or any event detection, we need to look beyond stationary cameras and employ an automatically configurable network of nonoverlapping cameras. These cameras need not have an overlapping field of view and should be allowed to move freely in space. Moreover, features like zooming in/out, readily available in security cameras these days, should be exploited in order to focus on any particular area of interest if needed. In this paper, a practical framework is proposed to self-calibrate dynamically moving and zooming cameras and determine their absolute and relative orientations, assuming that their relative position is known. A global linear solution is presented for self-calibrating each zooming/focusing camera in the network. After self-calibration, it is shown that only one automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction is sufficient to infer the dynamic network configuration. Our method generalizes previous work which considers restricted camera motions. Using minimal assumptions, we are able to successfully demonstrate promising results on synthetic, as well as on real data.

  15. Automatic camera tracking for remote manipulators

    SciTech Connect

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables.
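
    A stripped-down version of the pointing calculation, pan and tilt angles from the camera and target positions with a ±2° deadband, is sketched below; the coordinate conventions are assumptions and the original 4 x 4 transformation-matrix bookkeeping is omitted.

        import math

        DEADBAND_DEG = 2.0

        def pan_tilt_command(camera_xyz, target_xyz, current_pan_deg, current_tilt_deg):
            """Return new (pan, tilt) in degrees, commanding motion only outside the deadband."""
            dx = target_xyz[0] - camera_xyz[0]
            dy = target_xyz[1] - camera_xyz[1]
            dz = target_xyz[2] - camera_xyz[2]
            pan = math.degrees(math.atan2(dy, dx))
            tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
            if abs(pan - current_pan_deg) < DEADBAND_DEG:
                pan = current_pan_deg        # hold position: avoids continuous small motions
            if abs(tilt - current_tilt_deg) < DEADBAND_DEG:
                tilt = current_tilt_deg
            return pan, tilt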

  16. Automatic camera tracking for remote manipulators

    SciTech Connect

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables.

  17. Sky camera geometric calibration using solar observations

    NASA Astrophysics Data System (ADS)

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-01

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. Calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
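
    The calibration idea, adjusting camera parameters so that predicted sun positions match detected ones, can be sketched for the equisolid-angle projection r = 2 f sin(θ/2); the sketch assumes the sun's zenith and azimuth angles are supplied by a solar position algorithm, and it ignores lens distortion and the full orientation model handled in the paper.

        import numpy as np
        from scipy.optimize import least_squares

        def project_equisolid(params, zenith, azimuth):
            """Map sun (zenith, azimuth) in radians to image (x, y) for a fisheye with
            focal length f (pixels), principal point (cx, cy) and azimuth offset az0."""
            f, cx, cy, az0 = params
            r = 2.0 * f * np.sin(zenith / 2.0)
            return cx + r * np.sin(azimuth + az0), cy - r * np.cos(azimuth + az0)

        def calibrate(zenith, azimuth, sun_x, sun_y, x0=(700.0, 960.0, 960.0, 0.0)):
            """Least-squares fit of (f, cx, cy, az0) to detected sun positions."""
            def residuals(p):
                x, y = project_equisolid(p, zenith, azimuth)
                return np.concatenate([x - sun_x, y - sun_y])
            return least_squares(residuals, x0).x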

  18. Multi-cameras calibration from spherical targets

    NASA Astrophysics Data System (ADS)

    Zhao, Chengyun; Zhang, Jin; Deng, Huaxia; Yu, Liandong

    2016-01-01

    Multi-camera calibration using spheres is more convenient than using a planar target because spheres have an obvious advantage: they can be imaged from many different angles. The internal and external parameters of multiple cameras can be obtained through a single calibration. In this paper, a novel multi-camera calibration method based on multiple spheres is proposed. A calibration target with multiple fixed balls is used, and the geometric properties of the sphere projection model are analyzed. During the experiment, the spherical target is placed in the common field of view of the multi-camera system, and the corresponding data are stored when the cameras are triggered by a signal generator. The contours of the balls are detected by the Hough transform and the center coordinates are determined with sub-pixel accuracy. These center coordinates are then used as input for calibration, and the internal as well as external parameters are calculated by Zhang's theory. When multiple cameras are calibrated simultaneously from different angles using multiple spheres, the center coordinates of each sphere can be determined accurately even when the target images are out of focus, so this method can improve the calibration precision. Zhang's plane-template method is also included as a comparison experiment, and the error sources of the experiment are analyzed. The results indicate that the method proposed in this paper is suitable for multi-camera calibration.

  19. Thermal characterization of a NIR hyperspectral camera

    NASA Astrophysics Data System (ADS)

    Parra, Francisca; Meza, Pablo; Pezoa, Jorge E.; Torres, Sergio N.

    2011-11-01

    The accuracy achieved by applications employing hyperspectral data collected by hyperspectral cameras depends heavily on a proper estimation of the true spectral signal. Beyond question, a proper knowledge about the sensor response is key in this process. It is argued here that the common first order representation for hyperspectral NIR sensors does not represent accurately their thermal wavelength-dependent response, hence calling for more sophisticated and precise models. In this work, a wavelength-dependent, nonlinear model for a near infrared (NIR) hyperspectral camera is proposed based on its experimental characterization. Experiments have shown that when temperature is used as the input signal, the camera response is almost linear at low wavelengths, while as the wavelength increases the response becomes exponential. This wavelength-dependent behavior is attributed to the nonlinear responsivity of the sensors in the NIR spectrum. As a result, the proposed model considers different nonlinear input/output responses at different wavelengths. To complete the representation, both the nonuniform response of neighboring detectors in the camera and the time varying behavior of the input temperature have also been modeled. The experimental characterization and the proposed model assessment have been conducted using a NIR hyperspectral camera in the range of 900 to 1700 [nm] and a black body radiator source. The proposed model was utilized to successfully compensate for both: (i) the nonuniformity noise inherent to the NIR camera, and (ii) the striping noise induced by the nonuniformity and the scanning process of the camera while rendering hyperspectral images.

  20. A solid state streak camera

    NASA Astrophysics Data System (ADS)

    Kleinfelder, Stuart; Kwiatkowski, Kris; Shah, Ashish

    2005-03-01

    A monolithic solid-state streak camera has been designed and fabricated in a standard 0.35 μm, 3.3V, thin-oxide digital CMOS process. It consists of a 1-D linear array of 150 integrated photodiodes, followed by fast analog buffers and on-chip, 150-deep analog frame storage. Each pixel's front-end consists of an n-diffusion / p-well photodiode, with fast complementary reset transistors, and a source-follower buffer. Each buffer drives a line of 150 sample circuits per pixel, with each sample circuit consisting of an n-channel sample switch, a 0.1 pF double-polysilicon sample capacitor, a reset switch to definitively clear the capacitor, and a multiplexed source-follower readout buffer. Fast on-chip sample clock generation was designed using a self-timed break-before-make operation that ensures the maximum time for sample settling. The electrical analog bandwidth of each channel's buffer and sampling circuits was designed to exceed 1 GHz. Sampling speeds of 400 M-frames/s have been achieved using electrical input signals. Operation with optical input signals has been demonstrated at 100 MHz sample rates. Sample output multiplexing allows the readout of all 22,500 samples (150 pixels times 150 samples per pixel) in about 3 ms. The chip's output range was a maximum of 1.48 V on a 3.3V supply voltage, corresponding to a maximum 2.55 V swing at the photodiode. Time-varying output noise was measured to be 0.51 mV, rms, at 100 MHz, for a dynamic range of ~11.5 bits, rms. Circuit design details are presented, along with the results of electrical measurements and optical experiments with fast pulsed laser light sources at several wavelengths.

  1. 6. VAL CAMERA STATION, VIEW FROM INTERIOR OUT OF WINDOW ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA STATION, VIEW FROM INTERIOR OUT OF WINDOW OPENING TOWARD VAL FIRING RANGE LOOKING SOUTHEAST WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  2. 8. VAL CAMERA CAR, CLOSEUP VIEW OF 'FLARE' OR TRAJECTORY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VAL CAMERA CAR, CLOSE-UP VIEW OF 'FLARE' OR TRAJECTORY CAMERA ON SLIDING MOUNT. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  3. Two degree of freedom camera mount

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O. (Inventor)

    2003-01-01

    A two degree of freedom camera mount. The camera mount includes a socket, a ball, a first linkage and a second linkage. The socket includes an interior surface and an opening. The ball is positioned within an interior of the socket. The ball includes a coupling point for rotating the ball relative to the socket and an aperture for mounting a camera. The first and second linkages are rotatably connected to the socket and slidably connected to the coupling point of the ball. Rotation of the linkages with respect to the socket causes the ball to rotate with respect to the socket.

  4. Determining camera parameters for round glassware measurements

    NASA Astrophysics Data System (ADS)

    Baldner, F. O.; Costa, P. B.; Gomes, J. F. S.; Filho, D. M. E. S.; Leta, F. R.

    2015-01-01

    Nowadays there are many types of accessible cameras, including digital single lens reflex ones. Although these cameras are not usually employed in machine vision applications, they can be an interesting choice. However, these cameras have many user-selectable parameters and it may be difficult to choose the best of these in order to acquire images with the needed metrological quality. This paper proposes a methodology to select a set of parameters that will supply a machine vision system with images of the needed quality, considering the measurement requirements of laboratory glassware.

  5. New Modular Camera No Ordinary Joe

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Although dubbed 'Little Joe' for its small-format characteristics, a new wavefront sensor camera has proved that it is far from coming up short when paired with high-speed, low-noise applications. SciMeasure Analytical Systems, Inc., a provider of cameras and imaging accessories for use in biomedical research and industrial inspection and quality control, is the eye behind Little Joe's shutter, manufacturing and selling the modular, multi-purpose camera worldwide to advance fields such as astronomy, neurobiology, and cardiology.

  6. Nanolithography based on an atom pinhole camera.

    PubMed

    Melentiev, P N; Zablotskiy, A V; Lapshin, D A; Sheshin, E P; Baturin, A S; Balykin, V I

    2009-06-10

    In modern experimental physics the pinhole camera is used when the creation of a focusing element (lens) is difficult. We have experimentally realized a method of image construction in atom optics, based on the idea of an optical pinhole camera. With the use of an atom pinhole camera we have built an array of identical arbitrary-shaped atomic nanostructures with the minimum size of an individual nanostructure element down to 30 nm on an Si surface. The possibility of 30 nm lithography by means of atoms, molecules and clusters has been shown.

  7. Fuzzy logic control for camera tracking system

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
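    The following Python sketch illustrates the general idea of fuzzy-rule based command generation from a target position error in the image; the membership functions, rule consequents and rates are invented for illustration and are not the control system described in the record.

      # Minimal sketch of fuzzy-style pan command generation from horizontal target error,
      # assuming triangular membership functions and a centroid-like defuzzifier.
      import numpy as np

      def tri(x, left, peak, right):
          """Triangular membership function."""
          return np.maximum(np.minimum((x - left) / (peak - left + 1e-9),
                                       (right - x) / (right - peak + 1e-9)), 0.0)

      def fuzzy_pan_rate(error_px, half_width=320):
          """Map horizontal pixel error to a pan rate in deg/s via three fuzzy rules."""
          e = np.clip(error_px / half_width, -1.0, 1.0)          # normalised error
          mu_neg = tri(e, -1.5, -1.0, 0.0)                        # target left of centre
          mu_zero = tri(e, -0.5, 0.0, 0.5)                        # target centred
          mu_pos = tri(e, 0.0, 1.0, 1.5)                          # target right of centre
          rates = np.array([-5.0, 0.0, 5.0])                      # rule consequents [deg/s]
          weights = np.array([mu_neg, mu_zero, mu_pos])
          return float(np.dot(weights, rates) / (weights.sum() + 1e-9))

      for err in (-300, -50, 0, 120, 300):
          print(f"error {err:+4d} px -> pan rate {fuzzy_pan_rate(err):+.2f} deg/s")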

  8. Camera-based driver assistance systems

    NASA Astrophysics Data System (ADS)

    Grimm, Michael

    2013-04-01

    In recent years, camera-based driver assistance systems have taken an important step: from laboratory setup to series production. This tutorial gives a brief overview on the technology behind driver assistance systems, presents the most significant functionalities and focuses on the processes of developing camera-based systems for series production. We highlight the critical points which need to be addressed when camera-based driver assistance systems are sold in their thousands, worldwide - and the benefit in terms of safety which results from it.

  9. Task Panel Sensing with a Movable Camera

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Mathis, Donald W.; Magee, Michael; Hoff, William A.

    1990-03-01

    This paper discusses the integration of model based computer vision with a robot planning system. The vision system deals with structured objects with several movable parts (the "Task Panel"). The robot planning system controls a T3-746 manipulator that has a gripper and a wrist mounted camera. There are two control functions: move the gripper into position for manipulating the panel fixtures (doors, latches, etc.), and move the camera into positions preferred by the vision system. This paper emphasizes the issues related to repositioning the camera for improved viewpoints.

  10. Further results from a trial comparing a hidden speed camera programme with visible camera operation.

    PubMed

    Keall, Michael D; Povey, Lynley J; Frith, William J

    2002-11-01

    As described in a previous paper [Accident Anal. Prev., 33 (2001) 277], the hidden camera programme was found to be associated with significant net falls in speeds, crashes and casualties both in 'speed camera areas' (specific signed sites to which camera operation is restricted) and on 100 km/h speed limit roads generally. These changes in speeds, crashes and casualties were identified in the trial area in comparison with a control area where generally highly visible speed camera enforcement continued to be used (and was used in the trial area prior to the commencement of the trial). There were initial changes in public attitudes associated with the trial that later largely reverted to pre-trial levels. Analysis of 2 years' data of the trial showed that falls in crash and casualty rates and speeds associated with the hidden camera programme were being sustained. It is not possible to separate out the effects of the concealment of the cameras from other aspects of the hidden speed camera programme, such as the four-fold increase in ticketing. This increase in speed camera tickets issued was an expected consequence of hiding the cameras and as such, an integral part of the hidden camera programme being evaluated.

  11. The Simple Camera in School Counseling

    ERIC Educational Resources Information Center

    Schudson, Karen Rubin

    1975-01-01

    This article develops the concept of photo counseling and places special emphasis on the school counselor's use of the still camera. Three clinical examples highlight the variety of situations in which photography can be instrumental in school counseling. (Author)

  12. Increase in the Array Television Camera Sensitivity

    NASA Astrophysics Data System (ADS)

    Shakhrukhanov, O. S.

    A simple adder circuit for successive television frames, which makes it possible to considerably increase the sensitivity of such radiation detectors, is suggested using the array television camera QN902K as an example.
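    A quick numerical illustration of why summing successive frames raises sensitivity is given below; with independent per-frame noise the signal-to-noise ratio improves roughly as the square root of the number of frames added. The numbers are invented and this is not the adder circuit from the record.

      # Frame-averaging illustration: SNR gain ~ sqrt(N) for N frames with independent noise.
      import numpy as np

      rng = np.random.default_rng(42)
      signal = 5.0                                     # faint steady signal level [DN]
      noise_sigma = 20.0                               # per-frame read noise [DN]
      n_frames = 64

      frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, 128, 128))
      single_snr = signal / noise_sigma
      stacked = frames.mean(axis=0)                    # frame summation plus scaling
      stacked_snr = signal / stacked.std()
      print(f"single-frame SNR ~ {single_snr:.2f}, {n_frames}-frame SNR ~ {stacked_snr:.2f} "
            f"(expected gain sqrt(N) = {np.sqrt(n_frames):.1f}x)")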

  13. Calibration Procedures on Oblique Camera Setups

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Beside the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples from the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was flown at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well distributed measured points, which is a satisfying number for the camera calibration. In a first step with the help of

  14. NASA Camera Catches Moon 'Photobombing' Earth

    NASA Video Gallery

    On July 5, 2016, the moon passed between NOAA's DSCOVR satellite and Earth. NASA's EPIC camera aboard DSCOVR snapped these images over a period of about four hours. In this set, the far side of the...

  15. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  16. Newman Waves at Camera from Unity Module

    NASA Technical Reports Server (NTRS)

    1998-01-01

    STS-88 mission specialist James Newman, holding on to a handrail, waves back at the camera during the first of three Extravehicular activities(EVAs) performed during the mission. The orbiter can be seen reflected in his visor

  17. FORCAST Camera Installed on SOFIA Telescope

    NASA Video Gallery

    Cornell University's Faint Object Infrared Camera for the SOFIA Telescope, or FORCAST, being installed on the Stratospheric Observatory for Infrared Astronomy's 2.5-meter telescope in preparation f...

  18. Innovative camera system developed for Sprint vehicle

    SciTech Connect

    Not Available

    1985-04-01

    A new inspection system for the Sprint 101 ROV eliminates parallax errors because all three camera modules use a single lens for viewing. Parallax is the apparent displacement of an object when it is viewed from two points not in the same line of sight. The central camera is a Pentax 35-mm single lens reflex with a 28-mm lens. It comes with 250-shot film cassettes, an automatic film wind-on, and a data chamber display. An optical transfer assembly on the stills camera viewfinder transmits the image to one of the two video camera modules. The video picture transmitted to the surface is exactly the same as the stills photo. The surface operator can adjust the focus by viewing the video display.

  19. Daytime Aspect Camera for Balloon Altitudes

    NASA Technical Reports Server (NTRS)

    Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.

    2002-01-01

    We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40 km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600- to 1000-nm region of the spectrum, successfully provides daytime aspect information of approx. 10 arcsec resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models used to design the camera, but the daytime stellar magnitude limit was lower than expected due to longitudinal chromatic aberration in the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.

  20. Vacuum compatible miniature CCD camera head

    DOEpatents

    Conder, Alan D.

    2000-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

  1. Advanced Pointing Imaging Camera (APIC) Concept

    NASA Astrophysics Data System (ADS)

    Park, R. S.; Bills, B. G.; Jorgensen, J.; Jun, I.; Maki, J. N.; McEwen, A. S.; Riedel, E.; Walch, M.; Watkins, M. M.

    2016-10-01

    The Advanced Pointing Imaging Camera (APIC) concept is envisioned as an integrated system, with optical bench and flight-proven components, designed for deep-space planetary missions with 2-DOF control capability.

  2. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  3. Dilation framing camera with 4 ps resolution

    NASA Astrophysics Data System (ADS)

    Cai, Houzhi; Zhao, Xin; Liu, Jinyuan; Xie, Weixin; Bai, Yanli; Lei, Yunfei; Liao, Yubo; Niu, Hanben

    2016-04-01

    A framing camera using pulse-dilation technology is reported in this article. The camera uses pulse dilation of an electron signal from a pulsed photo-cathode (PC) to achieve high temporal resolution. While the PC is not pulsed, the measured temporal resolution of the camera without pulse-dilation is about 71 ps. While the excitation pulse is applied on the PC, the measured temporal resolution is improved to 4 ps by using the pulse-dilation technology. The spatial resolution of the dilation framing camera is also measured, which is better than 100 μm. The relationship between the temporal resolution and the PC bias voltage is obtained. The variation of the temporal resolution with the gradient of the PC excitation pulse is also provided.

  4. Compact stereo endoscopic camera using microprism arrays.

    PubMed

    Yang, Sung-Pyo; Kim, Jae-Jun; Jang, Kyung-Won; Song, Weon-Kook; Jeong, Ki-Hun

    2016-03-15

    This work reports a microprism array (MPA) based compact stereo endoscopic camera with a single image sensor. The MPAs were monolithically fabricated by using two-step photolithography and geometry-guided resist reflow to form an appropriate prism angle for stereo image pair formation. The fabricated MPAs were transferred onto a glass substrate with a UV curable resin replica by using polydimethylsiloxane (PDMS) replica molding and then successfully integrated in front of a single camera module. The stereo endoscopic camera with MPA splits an image into two stereo images and successfully demonstrates the binocular disparities between the stereo image pairs for objects with different distances. This stereo endoscopic camera can serve as a compact and 3D imaging platform for medical, industrial, or military uses.

  5. Action selection for single-camera SLAM.

    PubMed

    Vidal-Calleja, Teresa A; Sanfeliu, Alberto; Andrade-Cetto, Juan

    2010-12-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes mutual information between measurements and states to help the camera avoid making the ill-conditioned measurements that arise from the lack of depth information in monocular vision systems. Our system prompts a user with the appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionally, the system has been ported to a mobile robotic platform, thus closing the control-estimation loop. To show the viability of the approach, simulations and experiments are presented for the unconstrained motion of a handheld camera and for the motion of a mobile robot with nonholonomic constraints. When combined with a path planner, the technique safely drives to a marked goal while, at the same time, producing an optimal estimated map.
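    The action-scoring idea can be illustrated with a small linear-Gaussian example: each candidate motion is scored by the expected entropy reduction of the state covariance that its measurement would produce. The matrices below are toy values and the formulation is much simpler than the 6-DOF visual SLAM system of the record.

      # Schematic illustration of information-driven action selection: score each candidate
      # by the gain 0.5 * log(det(P) / det(P_post)) of a linearised measurement z = Hx + noise.
      import numpy as np

      def expected_info_gain(P, H, R):
          S = H @ P @ H.T + R                         # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
          P_post = (np.eye(P.shape[0]) - K @ H) @ P   # posterior covariance
          return 0.5 * np.log(np.linalg.det(P) / np.linalg.det(P_post))

      P = np.diag([0.5, 0.5, 2.0])                    # prior: depth (3rd state) poorly known
      candidates = {
          "move forward":  np.array([[1.0, 0.0, 0.1]]),   # barely observes depth
          "move sideways": np.array([[1.0, 0.0, 0.9]]),   # strong parallax, observes depth
      }
      R = np.array([[0.05]])
      for name, H in candidates.items():
          print(f"{name:14s} info gain = {expected_info_gain(P, H, R):.3f} nats")
      best = max(candidates, key=lambda a: expected_info_gain(P, candidates[a], R))
      print("selected action:", best)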

  6. Action selection for single-camera SLAM.

    PubMed

    Vidal-Calleja, Teresa A; Sanfeliu, Alberto; Andrade-Cetto, Juan

    2010-12-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes mutual information between measurements and states to help the camera avoid making the ill-conditioned measurements that arise from the lack of depth information in monocular vision systems. Our system prompts a user with the appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionally, the system has been ported to a mobile robotic platform, thus closing the control-estimation loop. To show the viability of the approach, simulations and experiments are presented for the unconstrained motion of a handheld camera and for the motion of a mobile robot with nonholonomic constraints. When combined with a path planner, the technique safely drives to a marked goal while, at the same time, producing an optimal estimated map. PMID:20350845

  7. Aviation spectral camera infinity target simulation system

    NASA Astrophysics Data System (ADS)

    Liu, Xinyue; Ming, Xing; Liu, Jiu; Guo, Wenji; Lv, Gunbo

    2014-11-01

    With the development of science and technology, the applications of aviation spectral cameras are becoming more widespread, which makes the development of a dynamic-target test system increasingly important. An aviation spectral camera infinity target simulation system can be used to test the resolution and the modulation transfer function of the camera. The construction and working principle of the infinity target simulation system are introduced in detail. A dynamic target generator based on a digital micromirror device (DMD) and the required performance of the collimation system are analyzed and reported. The DMD-based dynamic target generator has the advantages of convenient image replacement, small size and flexibility. According to the requirements of the tested camera, a full-field infinity dynamic target test plan was completed by rotating and moving the mirror.

  8. Selecting the Right Camera for Your Desktop.

    ERIC Educational Resources Information Center

    Rhodes, John

    1997-01-01

    Provides an overview of camera options and selection criteria for desktop videoconferencing. Key factors in image quality are discussed, including lighting, resolution, and signal-to-noise ratio; and steps to improve image quality are suggested. (LRW)

  9. A stereoscopic lens for digital cinema cameras

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  10. CMOS Camera Array With Onboard Memory

    NASA Technical Reports Server (NTRS)

    Gat, Nahum

    2009-01-01

    A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 Megapixels), a USB (universal serial bus) 2.0 interface, and an onboard memory. Exposure times, and other operating parameters, are sent from a control PC via the USB port. Data from the camera can be received via the USB port and the interface allows for simple control and data capture through a laptop computer.

  11. Mercuric iodide X-ray camera

    NASA Astrophysics Data System (ADS)

    Patt, B. E.; del Duca, A.; Dolin, R.; Ortale, C.

    1986-02-01

    A prototype X-ray camera utilizing a 1.5- by 1.5-in., 1024-element, thin mercuric iodide detector array has been tested and evaluated. The microprocessor-based camera is portable and operates at room temperature. Events can be localized within 1-2 mm at energies below 60 keV and within 5-6 mm at energies on the order of 600 keV.

  12. Positioning a Camera for Mars Reconnaissance Orbiter

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Workers at Lockheed Martin Space Systems, Denver, position a telescopic camera for installation onto NASA's Mars Reconnaissance Orbiter spacecraft on Dec. 11, 2004. Ball Aerospace and Technology Corp., Boulder, Colo., built this camera, called the High Resolution Imaging Science Experiment, or HiRISE, for the University of Arizona, Tucson, to supply for the mission. The orbiter is scheduled for launch in August 2005 carrying six science instruments.

  13. The CCD cameras of RATS project.

    NASA Astrophysics Data System (ADS)

    Scuderi, S.; Claudi, R. U.; Favata, F.; Bonanno, G.; Bruno, P.; Cosentino, R.; Belluso, M.; Calí, A.; Timpanaro, M. C.; Chiomento, V.; Farisato, G.; Frigo, A.; Gianesini, G.; Traverso, L.; Rebeschini, M.; Strazzabosco, D.

    We report on the characteristics and performance of the CCD cameras that will be used by the RATS (RAdial velocity and Transit Search) project, an Italian-ESA collaboration whose main goal is the search for extrasolar planets using the transit method. We describe the characteristics of the various cameras and the first tests at the Asiago Schmidt telescope at Cima Ekar.

  14. Pinhole Cameras: For Science, Art, and Fun!

    ERIC Educational Resources Information Center

    Button, Clare

    2007-01-01

    A pinhole camera is a camera without a lens. A tiny hole replaces the lens, and light is allowed to come in for short amount of time by means of a hand-operated shutter. The pinhole allows only a very narrow beam of light to enter, which reduces confusion due to scattered light on the film. This results in an image that is focused, reversed, and…

  15. Development of the TopSat camera

    NASA Astrophysics Data System (ADS)

    Greenway, Paul; Tosh, Ian; Morris, Nigel

    2004-06-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. An engineering model development programme verified optical alignment techniques and crucially, demonstrated structural stability through vibration tests. As a result of this, the flight model camera has been assembled at the Space Science & Technology Department of CCLRC's Rutherford Appleton Laboratory in the UK, in preparation for launch in 2005. The camera has been designed to be compact and lightweight so that it may be flown on a low cost mini-satellite (~120kg launch mass). To achieve this, the camera utilises an off-axis three mirror anastigmatic (TMA) system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; RAL (Rutherford Appleton Laboratory), SSTL (Surrey Satellite Technology Ltd.), QinetiQ and Infoterra. Its objective is to demonstrate provision of rapid response high-resolution imagery to fixed and mobile ground stations using a low cost mini-satellite. This paper describes the opto-mechanical design, assembly and alignment techniques implemented and reports on the test results obtained to date.

  16. Shortwave infrared camera with extended spectral sensitivity

    NASA Astrophysics Data System (ADS)

    Gerken, Martin; Achtner, Bertram; Kraus, Michael; Neumann, Tanja; Münzberg, Mario

    2012-06-01

    The shortwave infrared (SWIR) spectral range has certain advantages for daytime observation under fog and haze conditions. Due to the longer wavelength compared to the visible spectrum, the range performance in the SWIR is considerably extended. In addition, cooled SWIR focal plane arrays now reach sensitivities that make them usable for night viewing under twilight or moonlight conditions. The presented SWIR camera system combines color imaging in the visible spectrum with imaging in the SWIR spectrum. The 20x zoom optics is fully corrected between 440 nm and 1700 nm. A dichroic beam splitter projects the visible spectrum onto a color chip with HDTV resolution and the SWIR spectrum onto a 640x512 InGaAs focal plane array. The open architecture of the camera system allows the use of different SWIR sensors and CMOS sensors. A universally designed interface electronics module operates the cameras and provides standard video outputs and compressed video streams over an Ethernet interface. The camera system is designed to be integrated into various stabilized platforms. The camera concept is described, and the comparison with pure SWIR or combined SWIR/MWIR dual-band cameras is discussed from an application and system point of view.

  17. Low light performance of digital cameras

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror; Hertel, Dirk

    2009-01-01

    Photospace data previously measured on large image sets have shown that a high percentage of camera phone pictures are taken under low-light conditions. Corresponding image quality measurements linked the lowest quality to these conditions, and subjective analysis of image quality failure modes identified image blur as the most important contributor to image quality degradation. Camera phones without flash have to manage a trade-off when adjusting shutter time to low-light conditions. The shutter time has to be long enough to avoid extreme underexposure, but short enough that hand-held picture taking is still possible without excessive motion blur. There is still a lack of quantitative data on motion blur. Camera phones often do not record basic operating parameters such as shutter speed in their image metadata, and when recorded, the data are often inaccurate. We introduce a device and process for tracking camera motion and measuring its Point Spread Function (PSF). Vision-based metrics are introduced to assess the impact of camera motion on image quality so that the low-light performance of different cameras can be compared. Statistical distributions of user variability will be discussed.

  18. 3D astigmatic depth sensing camera

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.; Tyo, J. Scott; Schwiegerling, Jim

    2011-10-01

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images inexpensively and without major modifications to current cameras is uncommon. Our goal is to create a modification to a common commercial camera that allows a three-dimensional reconstruction. We desire such an imaging system to be inexpensive and easy to use. Furthermore, we require that any three-dimensional modification to a camera does not reduce its resolution. Here we present a possible solution to this problem. A commercial digital camera is used with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of the projected pattern, thereby encoding depth. This projector could be integrated into the flash unit of the camera. By carefully choosing a pattern we are able to exploit this differential focus in image processing. Wavelet transforms are performed on the image to pick out the projected pattern. By taking ratios of certain wavelet coefficients we are able to relate the contrast ratios to the distance of an object at a particular transverse position from the camera. We present our information regarding construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.
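    The depth cue can be illustrated as follows: the ratio of horizontal to vertical detail energy in a wavelet decomposition of the projected pattern shifts as one astigmatic focus becomes sharper than the other. The pattern, blur values and names below are synthetic assumptions, not the authors' pattern or processing chain.

      # Illustrative sketch: compare horizontal vs. vertical wavelet detail energy of a
      # projected grid patch whose two blur directions differ, as a proxy for depth.
      import numpy as np
      import pywt
      from scipy.ndimage import gaussian_filter1d

      def make_grid_patch(blur_h, blur_v, size=64, period=8):
          """Projected grid pattern with different horizontal / vertical blur."""
          x = np.arange(size)
          grid = (np.sin(2 * np.pi * x / period)[None, :] +
                  np.sin(2 * np.pi * x / period)[:, None])
          grid = gaussian_filter1d(grid, blur_h, axis=1)   # blur along rows
          grid = gaussian_filter1d(grid, blur_v, axis=0)   # blur along columns
          return grid

      def hv_ratio(patch):
          _, (cH, cV, _) = pywt.dwt2(patch, "haar")        # detail bands of a 2-D DWT
          return np.sum(cH ** 2) / (np.sum(cV ** 2) + 1e-12)

      for label, (bh, bv) in {"near": (0.5, 3.0), "mid": (1.5, 1.5), "far": (3.0, 0.5)}.items():
          print(f"{label}: wavelet detail energy ratio = {hv_ratio(make_grid_patch(bh, bv)):.2f}")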

  19. Calibration of multi-camera photogrammetric systems

    NASA Astrophysics Data System (ADS)

    Detchev, I.; Mazaheri, M.; Rondeel, S.; Habib, A.

    2014-11-01

    Due to the low-cost and off-the-shelf availability of consumer grade cameras, multi-camera photogrammetric systems have become a popular means for 3D reconstruction. These systems can be used in a variety of applications such as infrastructure monitoring, cultural heritage documentation, biomedicine, mobile mapping, as-built architectural surveys, etc. In order to ensure that the required precision is met, a system calibration must be performed prior to the data collection campaign. This system calibration should be performed as efficiently as possible, because it may need to be completed many times. Multi-camera system calibration involves the estimation of the interior orientation parameters of each involved camera and the estimation of the relative orientation parameters among the cameras. This paper first reviews a method for multi-camera system calibration with built-in relative orientation constraints. A system stability analysis algorithm is then presented which can be used to assess different system calibration outcomes. The paper explores the required calibration configuration for a specific system in two situations: major calibration (when both the interior orientation parameters and relative orientation parameters are estimated), and minor calibration (when the interior orientation parameters are known a-priori and only the relative orientation parameters are estimated). In both situations, system calibration results are compared using the system stability analysis methodology.

  20. CCD camera system for cometary research

    NASA Astrophysics Data System (ADS)

    Oliversen, R. J.

    1988-08-01

    The objective is to upgrade the NASA/GSFC 36 inch telescope instrumentation, primarily with a new charge coupled device (CCD) camera system, to permit an effective monitoring program of cometary activity by means of narrowband imaging and spectroscopic techniques. Researchers have twice taken delivery of the CCD camera system from Princeton Scientific Instruments and twice returned it within six weeks for repair. During the times they had the camera system in the lab, they measured the instrumental performance of the TEK 512 x 512 CCD chip (e.g., readout noise, dark current, etc) and developed the complete operational software for the camera system plus several useful observing and data reduction routines for use at the telescope. The CCD camera system is controlled by an IBM-AT computer. The peripheral equipment and software to permit the efficient transfer of large amounts of data to the LASP's computers (VAXs) and subsequent timely reductions are also in place. The Io torus (S II) emission was monitored with a Fabry-Perot scanning spectrometer, in conjunction with the International Jupiter Watch. The CCD camera system will be coupled to a narrowband interference filter imager and a long-slit spectrograph to provide regular and well-calibrated spatial and spectral observations of comets.

  1. Calibration of cameras with radially symmetric distortion.

    PubMed

    Tardif, Jean-Philippe; Sturm, Peter; Trudeau, Martin; Roy, Sébastien

    2009-09-01

    We present algorithms for plane-based calibration of general radially distorted cameras. By this, we understand cameras that have a distortion center and an optical axis such that the projection rays of pixels lying on a circle centered on the distortion center form a right viewing cone centered on the optical axis. The camera is said to have a single viewpoint (SVP) if all such viewing cones have the same apex (the optical center); otherwise, we speak of NSVP cases. This model encompasses the classical radial distortion model [5], fisheyes, and most central or noncentral catadioptric cameras. Calibration consists in the estimation of the distortion center, the opening angles of all viewing cones, and their optical centers. We present two approaches of computing a full calibration from dense correspondences of a single or multiple planes with known euclidean structure. The first one is based on a geometric constraint linking viewing cones and their intersections with the calibration plane (conic sections). The second approach is a homography-based method. Experiments using simulated and a broad variety of real cameras show great stability. Furthermore, we provide a comparison with Hartley-Kang's algorithm [12], which, however, cannot handle such a broad variety of camera configurations, showing similar performance.

  2. Calibration Procedures in Mid Format Camera Setups

    NASA Astrophysics Data System (ADS)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of mid-format cameras make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Aerial images over a well designed test field with 3D structures and/or different flight altitudes enable the determination of calibration values in the Bingo software; it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Beside the mechanical work, especially in mounting the camera next to the IMU, two lever arms have to be measured with mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. The measurement with a total station is not a difficult task, but the definition of the right centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; in that case a gyro-based stabilized platform is recommended. This means that the IMU must be mounted next to the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problematic aspect is that the IMU-to-GPS-antenna lever arm is floating. We therefore have to deal with an additional data stream, the movement values of the stabilizer, to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating levers into account, delivers the expected result, the lever arms between IMU and camera can be applied
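    The lever-arm bookkeeping discussed above can be sketched numerically: the GPS antenna position is transferred to the camera projection centre through body-frame lever arms rotated into the mapping frame by the IMU attitude. The coordinates, lever arms and attitude below are invented values for illustration.

      # Minimal lever-arm sketch: antenna position + R(attitude) * (antenna->IMU + IMU->camera).
      import numpy as np

      def rotation_from_rpy(roll, pitch, yaw):
          """Body-to-mapping-frame rotation from roll/pitch/yaw in radians."""
          cr, sr = np.cos(roll), np.sin(roll)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
          Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
          return Rz @ Ry @ Rx

      antenna_map = np.array([512345.12, 5403210.45, 812.30])   # GPS antenna, mapping frame [m]
      lever_antenna_to_imu = np.array([-0.85, 0.12, -1.10])      # measured in body frame [m]
      lever_imu_to_camera = np.array([0.05, 0.30, -0.20])        # measured in body frame [m]

      R = rotation_from_rpy(np.deg2rad(1.2), np.deg2rad(-0.8), np.deg2rad(47.0))
      camera_centre = antenna_map + R @ (lever_antenna_to_imu + lever_imu_to_camera)
      print("camera projection centre:", camera_centre)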

  3. New developments to improve SO2 cameras

    NASA Astrophysics Data System (ADS)

    Luebcke, P.; Bobrowski, N.; Hoermann, C.; Kern, C.; Klein, A.; Kuhn, J.; Vogel, L.; Platt, U.

    2012-12-01

    The SO2 camera is a remote sensing instrument that measures the two-dimensional distribution of SO2 (column densities) in volcanic plumes using scattered solar radiation as a light source. From these data SO2 fluxes can be derived. The high time resolution on the order of 1 Hz allows correlating SO2 flux measurements with other traditional volcanological measurement techniques, e.g., seismology. In the last years the application of SO2 cameras has increased; however, there is still potential to improve the instrumentation. First of all, the influence of aerosols and ash in the volcanic plume can lead to large errors in the calculated SO2 flux if not accounted for. We present two different concepts to deal with the influence of ash and aerosols. The first approach uses a co-axial DOAS system that was added to a two-filter SO2 camera. The camera used Filter A (peak transmission centred around 315 nm) to measure the optical density of SO2 and Filter B (centred around 330 nm) to correct for the influence of ash and aerosol. The DOAS system simultaneously performs spectroscopic measurements in a small area of the camera's field of view and gives additional information to correct for these effects. Comparing the optical densities for the two filters with the SO2 column density from the DOAS allows not only a much more precise calibration, but also conclusions to be drawn about the influence of ash and aerosol scattering. Measurement examples from Popocatépetl, Mexico in 2011 are shown and interpreted. Another approach combines the SO2 camera measurement principle with the extremely narrow and periodic transmission of a Fabry-Pérot interferometer. The narrow transmission window allows selecting individual SO2 absorption bands (or series of bands) as a substitute for Filter A. Measurements are therefore more selective to SO2. Instead of Filter B, as in classical SO2 cameras, the correction for aerosol can be performed by shifting the transmission window of the Fabry
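    A simplified numerical sketch of the two-filter correction idea follows: apparent absorbances tau = -ln(I / I0) are computed for the on-band and off-band images, and the off-band value is subtracted to remove broadband ash/aerosol extinction. The intensities and absorbance values are synthetic, and the calibration to SO2 column densities (for example via the DOAS) is not shown.

      # Two-filter optical-density correction on a tiny synthetic "image".
      import numpy as np

      rng = np.random.default_rng(3)
      shape = (4, 4)
      tau_so2 = np.abs(rng.normal(0.15, 0.05, shape))      # SO2 absorbance, Filter A only
      tau_aerosol = np.abs(rng.normal(0.08, 0.02, shape))  # broadband extinction, both filters

      I0_a, I0_b = 1000.0, 1200.0                        # clear-sky background intensities
      I_a = I0_a * np.exp(-(tau_so2 + tau_aerosol))      # plume image through Filter A (~315 nm)
      I_b = I0_b * np.exp(-tau_aerosol)                  # plume image through Filter B (~330 nm)

      tau_a = -np.log(I_a / I0_a)
      tau_b = -np.log(I_b / I0_b)
      tau_corrected = tau_a - tau_b                      # aerosol-corrected apparent absorbance
      print("max error vs. true SO2 absorbance:", np.max(np.abs(tau_corrected - tau_so2)))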

  4. Traffic monitoring with distributed smart cameras

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Today the automated analysis of traffic situations is still in its infancy--the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software, one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world co-ordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for all the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results which we have achieved so far.
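    The HOG-based pedestrian detection step mentioned above can be sketched with OpenCV's stock people detector as below; this is only a generic illustration and does not reproduce the deployed system's vehicle tracking, optical-flow or multi-camera georeferencing, and the stream URL is hypothetical.

      # Generic OpenCV HOG pedestrian detection on a single frame.
      import cv2

      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      def detect_pedestrians(frame_bgr):
          """Return pedestrian bounding boxes (x, y, w, h) for one video frame."""
          rects, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                                padding=(8, 8), scale=1.05)
          return [tuple(int(v) for v in r) for r in rects]

      # Example usage on a frame read from a camera stream:
      # cap = cv2.VideoCapture("rtsp://camera-at-crossing/stream")  # hypothetical URL
      # ok, frame = cap.read()
      # if ok:
      #     print(detect_pedestrians(frame))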

  5. Contrail study with ground-based cameras

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Hempel, R.; Flentje, H.; Garhammer, M.; Graf, K.; Kox, S.; Lösslein, H.; Mayer, B.

    2013-08-01

    Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle < 90° or whole-sky imager) at the ground at various positions are used to track contrails and to derive their altitude, width, and horizontal speed. Camera models for both types are described to derive the observation angles for given image coordinates and their inverse. The models are calibrated with sightings of the Sun, the Moon and a few bright stars. The methods are applied and tested in a case study. Four persistent contrails crossing each other together with a short-lived one are observed with the cameras. Vertical and horizontal positions of the contrails are determined from the camera images to an accuracy of better than 200 m and horizontal speed to 0.2 m s-1. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.

  6. Contrail study with ground-based cameras

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Hempel, R.; Flentje, H.; Garhammer, M.; Graf, K.; Kox, S.; Lösslein, H.; Mayer, B.

    2013-12-01

    Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle < 90° or whole-sky imager) at the ground at various positions are used to track contrails and to derive their altitude, width, and horizontal speed. Camera models for both types are described to derive the observation angles for given image coordinates and their inverse. The models are calibrated with sightings of the Sun, the Moon and a few bright stars. The methods are applied and tested in a case study. Four persistent contrails crossing each other, together with a short-lived one, are observed with the cameras. Vertical and horizontal positions of the contrails are determined from the camera images to an accuracy of better than 230 m and horizontal speed to 0.2 m s-1. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails, suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.

  7. 16 CFR 1025.45 - In camera materials.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false In camera materials. 1025.45 Section 1025.45... PROCEEDINGS Hearings § 1025.45 In camera materials. (a) Definition. In camera materials are documents... excluded from the public record. (b) In camera treatment of documents and testimony. The Presiding...

  8. How to Build Your Own Document Camera for around $100

    ERIC Educational Resources Information Center

    Van Orden, Stephen

    2010-01-01

    Document cameras can have great utility in second language classrooms. However, entry-level consumer document cameras start at around $350. This article describes how the author built three document cameras and offers suggestions for how teachers can successfully build their own quality document camera using a webcam for around $100.

  9. Design of Endoscopic Capsule With Multiple Cameras.

    PubMed

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of the wireless capsule endoscopy, in this paper, we propose a new system of the endoscopic capsule with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). For covering more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption to prolong the MCEC's working life, a low complexity image compressor with PSNR 40.7 dB and compression rate 86% is implemented. A chipset is designed and implemented for the MCEC and a six cameras endoscopic capsule prototype is implemented by using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can achieve 98% and its power consumption is only about 7.1 mW. PMID:25376042

  10. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we tested and provide operational procedures to calibrate the camera and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g., modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post data acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high speed acquisition. PMID:26500051
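    The phasor analysis of a stack of phase-modulated images can be sketched as follows: g and s are the cosine and sine Fourier components at the modulation frequency, and a single-exponential lifetime follows from tau = s / (omega * g). The stack below is synthetic, the per-pixel calibrations described in the record are omitted, and this is not the SimFCS implementation.

      # Minimal phasor-approach sketch on a synthetic stack of 8 phase-stepped images.
      import numpy as np

      f_mod = 20e6                                  # modulation frequency [Hz]
      omega = 2 * np.pi * f_mod
      phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)   # 8 phase steps

      tau_true = 2.5e-9                             # assumed fluorophore lifetime [s]
      m = 1.0 / np.sqrt(1.0 + (omega * tau_true) ** 2)         # modulation depth
      phi = np.arctan(omega * tau_true)                         # phase delay
      stack = (100.0 * (1.0 + m * np.cos(phases[:, None, None] - phi))
               + np.random.default_rng(0).normal(0, 1.0, (8, 32, 32)))

      dc = stack.mean(axis=0)
      g = (stack * np.cos(phases)[:, None, None]).mean(axis=0) * 2.0 / dc
      s = (stack * np.sin(phases)[:, None, None]).mean(axis=0) * 2.0 / dc
      tau_est = s / (omega * g)
      print(f"median estimated lifetime: {np.median(tau_est) * 1e9:.2f} ns (true 2.5 ns)")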

  11. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
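    As a software analogue of the multi-scale Laplacian-of-Gaussian edge detection described as the FPGA demonstration, the following NumPy/SciPy sketch illustrates the algorithm only, not the hardware architecture; the synthetic image and scale values are assumptions.

      # Multi-scale Laplacian-of-Gaussian edge candidates via zero crossings.
      import numpy as np
      from scipy import ndimage

      def log_edges(image, sigma):
          """Zero crossings of the Laplacian-of-Gaussian response at one scale."""
          response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
          sign = response > 0
          # a pixel is an edge candidate if the LoG response changes sign at a neighbour
          return (sign != np.roll(sign, 1, axis=0)) | (sign != np.roll(sign, 1, axis=1))

      rng = np.random.default_rng(0)
      img = np.zeros((128, 128))
      img[32:96, 32:96] = 1.0                                       # synthetic bright square
      img += rng.normal(0, 0.05, img.shape)

      for s in (1.0, 2.0, 4.0):                                     # multi-scale sweep
          print(f"sigma={s}: {int(log_edges(img, s).sum())} edge candidate pixels")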

  12. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898
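    A generic OpenCV sketch of chessboard-based calibration and undistortion of the kind referred to is given below; this is not the authors' software, the image folder path is hypothetical, and for a very wide-angle lens a fisheye or rational distortion model may be more appropriate than the default one used here.

      # Generic OpenCV chessboard calibration and undistortion sketch.
      import glob
      import cv2
      import numpy as np

      pattern = (9, 6)                                           # inner chessboard corners
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_points, img_points, size = [], [], None
      for path in glob.glob("gopro_calib/*.jpg"):                # hypothetical image folder
          gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
              corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
              obj_points.append(objp)
              img_points.append(corners)
              size = gray.shape[::-1]

      if obj_points:
          rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
          print("RMS reprojection error [px]:", rms)
          undistorted = cv2.undistort(cv2.imread(path), K, dist)  # undistort the last image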

  13. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up to 8 Mp resolution.

  14. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  15. Camera settings for UAV image acquisition

    NASA Astrophysics Data System (ADS)

    O'Connor, James; Smith, Mike J.; James, Mike R.

    2016-04-01

    The acquisition of aerial imagery has become more ubiquitous than ever in the geosciences due to the advent of consumer-grade UAVs capable of carrying imaging devices. These allow the collection of high spatial resolution data in a timely manner with little expertise. Conversely, the cameras/lenses used to acquire this imagery are often given less thought, and can be unfit for purpose. Given weight constraints which are frequently an issue with UAV flights, low-payload UAVs (<1 kg) limit the types of cameras/lenses which could potentially be used for specific surveys, and therefore the quality of imagery which can be acquired. This contribution discusses these constraints, which need to be considered when selecting a camera/lens for conducting a UAV survey and how they can best be optimized. These include balancing of the camera exposure triangle (ISO, Shutter speed, Aperture) to ensure sharp, well exposed imagery, and its interactions with other camera parameters (Sensor size, Focal length, Pixel pitch) as well as UAV flight parameters (height, velocity).
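    As a worked illustration of how the parameters discussed above interact, the short sketch below computes ground sample distance and forward motion blur from hypothetical flight and camera values (the numbers are examples, not values from the contribution).

      # Ground sample distance (GSD) and forward motion blur from camera and
      # flight parameters; all numbers are hypothetical examples.
      def gsd_m(pixel_pitch_m, focal_length_m, height_m):
          """Ground footprint of one pixel, in metres."""
          return pixel_pitch_m * height_m / focal_length_m

      def motion_blur_px(velocity_ms, shutter_s, gsd):
          """Forward image motion during the exposure, in pixels."""
          return velocity_ms * shutter_s / gsd

      pitch = 4.5e-6      # 4.5 um pixel pitch
      focal = 16e-3       # 16 mm focal length
      height = 100.0      # 100 m flying height
      v = 8.0             # 8 m/s ground speed
      shutter = 1 / 1000  # 1 ms exposure

      g = gsd_m(pitch, focal, height)
      print(f"GSD: {g * 100:.1f} cm/px, blur: {motion_blur_px(v, shutter, g):.2f} px")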

  16. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in constructing such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we test the camera and provide operational procedures to calibrate it and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and is needed for every camera setting, e.g., modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring large-frame, high-speed acquisition.
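    The phasor analysis mentioned above reduces each pixel's modulated response to a pair of coordinates; a minimal numpy sketch of that transform is given below, assuming a stack of images taken at equally spaced phase steps (this follows the generic phasor definition, not the SimFCS implementation or the camera's calibration corrections).

      # Generic phasor transform for frequency-domain FLIM data: each pixel's
      # intensity over K phase steps is reduced to (g, s) coordinates.
      import numpy as np

      def phasor(stack):
          """stack: (K, H, W) images taken at K equally spaced phase steps."""
          K = stack.shape[0]
          theta = 2 * np.pi * np.arange(K) / K
          dc = stack.mean(axis=0)
          g = 2 * (stack * np.cos(theta)[:, None, None]).mean(axis=0) / dc
          s = 2 * (stack * np.sin(theta)[:, None, None]).mean(axis=0) / dc
          return g, s

      # Synthetic single-exponential decay at 20 MHz modulation: the phasor
      # should land at (m*cos(phi), m*sin(phi)).
      f_mod, tau = 20e6, 2.5e-9
      omega = 2 * np.pi * f_mod
      phi, m = np.arctan(omega * tau), 1 / np.sqrt(1 + (omega * tau) ** 2)
      K, H, W = 8, 4, 4
      k = np.arange(K).reshape(K, 1, 1)
      stack = 1 + m * np.cos(2 * np.pi * k / K - phi) * np.ones((K, H, W))
      g, s = phasor(stack)
      print(float(g[0, 0]), float(m * np.cos(phi)))   # these two should agree
      print(float(s[0, 0]), float(m * np.sin(phi)))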

  17. Camera Calibration with Radial Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration has played an increasingly important role in recent times. Besides true digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs and other low-weight flying platforms. In-flight calibration of those systems plays a significant role in enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. Using statistical methods, the accuracy of photo measurements has been analyzed as a function of the distance of points from the image center. This test provides a curve of measurement precision as a function of photo radius. A large number of camera types have been tested with dense, well-distributed point measurements in image space. The tests demonstrate a functional connection between accuracy and radial distance and yield a method for checking and enhancing the geometric capability of the cameras with respect to these results.
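    A hedged sketch of the statistical idea follows: bin image-measurement residuals by their radial distance from the image centre and report the RMS per bin, which traces the precision-versus-radius curve. The residuals here are synthetic stand-ins for the values a bundle adjustment would provide.

      # Precision as a function of photo radius: group residuals by radial
      # distance and report the RMS per bin (synthetic residuals).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 5000
      xy = rng.uniform(-1.0, 1.0, size=(n, 2))       # normalised image coordinates
      r = np.hypot(xy[:, 0], xy[:, 1])
      residual = rng.normal(scale=0.5 + 0.8 * r**2)  # noisier towards the corners

      bins = np.linspace(0.0, r.max(), 8)
      idx = np.digitize(r, bins)
      for b in range(1, len(bins)):
          sel = residual[idx == b]
          if sel.size:
              rms = np.sqrt(np.mean(sel**2))
              print(f"r in [{bins[b-1]:.2f}, {bins[b]:.2f}): RMS = {rms:.3f} ({sel.size} pts)")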

  18. Phase camera experiment for Advanced Virgo

    NASA Astrophysics Data System (ADS)

    Agatsuma, Kazuhiro; van Beuzekom, Martin; van der Schaaf, Laura; van den Brand, Jo

    2016-07-01

    We report on a study of the phase camera, which is a frequency-selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and to implement the position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is a great benefit for the manipulation of the delicate controls. Overcoming mirror aberrations will also be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking of such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance.

  19. Managing a large database of camera fingerprints

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica; Filler, Tomáš

    2010-01-01

    Sensor fingerprint is a unique noise-like pattern caused by slightly varying pixel dimensions and inhomogeneity of the silicon wafer from which the sensor is made. The fingerprint can be used to prove that an image came from a specific digital camera. The presence of a camera fingerprint in an image is usually established using a detector that evaluates cross-correlation between the fingerprint and image noise. The complexity of the detector is thus proportional to the number of pixels in the image. Although computing the detector statistic for a few megapixel image takes several seconds on a single-processor PC, the processing time becomes impractically large if a sizeable database of camera fingerprints needs to be searched through. In this paper, we present a fast searching algorithm that utilizes special "fingerprint digests" and sparse data structures to address several tasks that forensic analysts will find useful when deploying camera identification from fingerprints in practice. In particular, we develop fast algorithms for finding if a given fingerprint already resides in the database and for determining whether a given image was taken by a camera whose fingerprint is in the database.
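    The detector this work builds on is essentially a normalized cross-correlation between an image's noise residual and a stored fingerprint; a compact, hypothetical version of that core test is sketched below (the fingerprint digests and sparse search structures of the paper are not reproduced, and the box-filter denoiser is a crude stand-in for a proper one).

      # Core camera-fingerprint test: correlate an image's noise residual with
      # a stored fingerprint. Simplified, generic illustration only.
      import numpy as np

      def noise_residual(img, k=3):
          """Crude denoiser substitute: subtract a (2k+1) x (2k+1) local mean."""
          pad = np.pad(img, k, mode="reflect")
          box = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                    for dy in range(2 * k + 1) for dx in range(2 * k + 1))
          return img - box / (2 * k + 1) ** 2

      def ncc(a, b):
          a, b = a - a.mean(), b - b.mean()
          return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

      rng = np.random.default_rng(1)
      fingerprint = rng.normal(size=(128, 128))          # stored sensor fingerprint
      yy, xx = np.mgrid[0:128, 0:128]
      scene = 120 + 40 * np.sin(xx / 20.0) * np.cos(yy / 25.0)
      image = scene * (1 + 0.02 * fingerprint)           # camera imprints its pattern

      print("match   :", round(ncc(noise_residual(image), fingerprint), 3))
      print("no match:", round(ncc(noise_residual(image), rng.normal(size=(128, 128))), 3))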

  20. Mobile phone camera benchmarking: combination of camera speed and image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed; for example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from the standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. This work gives detailed benchmarking results for mobile phone camera systems on the market. The paper also proposes combined benchmarking metrics, which include both quality and speed parameters.
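    The proposed combination of quality and speed metrics can be illustrated with a small, hypothetical scoring function that normalizes each measurement against a reference value and averages the results with weights; the weights, reference values and metric names below are illustrative, not the paper's.

      # Hypothetical combined benchmarking score from quality and speed metrics.
      def combined_score(metrics, references, weights):
          total, wsum = 0.0, 0.0
          for name, (ref, higher_is_better) in references.items():
              x = metrics[name] / ref if higher_is_better else ref / metrics[name]
              total += weights[name] * min(x, 2.0)   # cap the benefit of outliers
              wsum += weights[name]
          return 100.0 * total / wsum

      references = {
          "mtf50_cyc_per_px": (0.30, True),    # sharpness: higher is better
          "snr_db":           (36.0, True),    # low-light noise
          "shutter_lag_s":    (0.25, False),   # speed: lower is better
          "shot_to_shot_s":   (1.00, False),
      }
      weights = {"mtf50_cyc_per_px": 2, "snr_db": 2, "shutter_lag_s": 1, "shot_to_shot_s": 1}
      phone = {"mtf50_cyc_per_px": 0.28, "snr_db": 38.0, "shutter_lag_s": 0.18, "shot_to_shot_s": 0.9}
      print(f"combined score: {combined_score(phone, references, weights):.1f}")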

  1. Analysis of Camera Parameters Value in Various Object Distances Calibration

    NASA Astrophysics Data System (ADS)

    Razali Yusoff, Ahmad; Farid Mohd Ariff, Mohd; Idris, Khairulnizam M.; Majid, Zulkepli; Setan, Halim; Chong, Albert K.

    2014-02-01

    In photogrammetric applications, good camera parameters are needed for mapping purposes, for example when an Unmanned Aerial Vehicle (UAV) is equipped with non-metric camera devices. Simple camera calibration is a common laboratory procedure for obtaining the camera parameter values. In aerial mapping, interior camera parameter values from close-range camera calibration are used to correct image errors. However, the causes and effects of the calibration steps used to achieve accurate mapping need to be analyzed. Therefore, this research contributes an analysis of camera parameters obtained with a portable calibration frame of 1.5 × 1 meter in size. Object distances of two, three, four, five, and six meters are the focus of the research. Results are analyzed to determine the changes in image and camera parameter values. The camera calibration parameters are thus considered to differ depending on the type of calibration parameter and the object distance.

  2. Calibration method for a central catadioptric-perspective camera system.

    PubMed

    He, Bingwei; Chen, Zhipeng; Li, Youfu

    2012-11-01

    A central catadioptric-perspective camera system is widely used nowadays. A critical problem is that current calibration methods cannot determine the extrinsic parameters between the central catadioptric camera and a perspective camera effectively. We present a novel calibration method for a central catadioptric-perspective camera system, in which the central catadioptric camera has a hyperbolic mirror. Two cameras are used to capture images of one calibration pattern at different spatial positions. A virtual camera is constructed at the origin of the central catadioptric camera and faced toward the calibration pattern. The transformation between the virtual camera and the calibration pattern could be computed first and the extrinsic parameters between the central catadioptric camera and the calibration pattern could be obtained. Three-dimensional reconstruction results of the calibration pattern show a high accuracy and validate the feasibility of our method.

  3. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many types of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared and visible CCD cameras uses a shared large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator focal position when the environmental temperature changes, and the image quality of the wide-field collimator and the test accuracy are also improved. Its performance matches that of foreign counterparts at a much lower cost, and it is expected to find a good market.
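    The multiple-frame averaging step mentioned above has a simple statistical basis: for N independent frames the random noise falls roughly as 1/sqrt(N). The sketch below demonstrates this on synthetic frames (values are arbitrary).

      # Multiple-frame averaging reduces random noise roughly as 1/sqrt(N).
      import numpy as np

      rng = np.random.default_rng(42)
      truth = np.full((64, 64), 100.0)
      frames = truth + rng.normal(scale=5.0, size=(16, 64, 64))

      print(f"single frame noise: {frames[0].std():.2f}")
      print(f"16-frame average:   {frames.mean(axis=0).std():.2f} "
            f"(expected ~{5.0 / np.sqrt(16):.2f})")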

  4. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  5. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  6. 1D fast coded aperture camera.

    PubMed

    Haw, Magnus; Bellan, Paul

    2015-04-01

    A fast (100 MHz) 1D coded aperture visible light camera has been developed as a prototype for imaging plasma experiments in the EUV/X-ray bands. The system uses printed patterns on transparency sheets as the masked aperture and an 80 channel photodiode array (9 V reverse bias) as the detector. In the low signal limit, the system has demonstrated 40-fold increase in throughput and a signal-to-noise gain of ≈7 over that of a pinhole camera of equivalent parameters. In its present iteration, the camera can only image visible light; however, the only modifications needed to make the system EUV/X-ray sensitive are to acquire appropriate EUV/X-ray photodiodes and to machine a metal masked aperture. PMID:25933861
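    The throughput advantage of a coded aperture comes from viewing the scene through many open slits at once and then decoding the multiplexed detector signal; the toy 1D model below illustrates that idea under simple assumptions (a random binary mask, a cyclic forward model, and recovery by solving the linear system rather than by the correlation decoding a dedicated URA mask would allow). It is not the authors' reconstruction code.

      # Toy 1D coded-aperture model: encode two point sources through a binary
      # multi-slit mask and recover them by inverting the forward model.
      import numpy as np

      rng = np.random.default_rng(7)
      N = 80                                  # detector channels (cf. the 80-element array)
      mask = rng.integers(0, 2, size=N)       # random binary aperture, roughly 50% open
      scene = np.zeros(N)
      scene[[20, 45]] = [1.0, 0.6]            # two point sources

      # Each detector channel sees the scene through a shifted copy of the mask.
      A = np.array([np.roll(mask, k) for k in range(N)], dtype=float)
      detector = A @ scene + rng.normal(scale=0.005, size=N)

      recon, *_ = np.linalg.lstsq(A, detector, rcond=None)
      print("true sources at:         ", np.flatnonzero(scene))
      print("strongest recovered bins:", np.sort(np.argsort(recon)[-2:]))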

  7. A Daytime Aspect Camera for Balloon Altitudes

    NASA Technical Reports Server (NTRS)

    Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.; Six, N. Frank (Technical Monitor)

    2001-01-01

    We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600-1000 nm region of the spectrum, successfully provided daytime aspect information of approximately 10 arcsecond resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models, but the daytime stellar magnitude limit was lower than expected due to dispersion of red light by the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.

  8. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possible for medical purposes.

  9. Mechanical Design of the LSST Camera

    SciTech Connect

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; Ku, John; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  10. 1D fast coded aperture camera.

    PubMed

    Haw, Magnus; Bellan, Paul

    2015-04-01

    A fast (100 MHz) 1D coded aperture visible light camera has been developed as a prototype for imaging plasma experiments in the EUV/X-ray bands. The system uses printed patterns on transparency sheets as the masked aperture and an 80 channel photodiode array (9 V reverse bias) as the detector. In the low signal limit, the system has demonstrated 40-fold increase in throughput and a signal-to-noise gain of ≈7 over that of a pinhole camera of equivalent parameters. In its present iteration, the camera can only image visible light; however, the only modifications needed to make the system EUV/X-ray sensitive are to acquire appropriate EUV/X-ray photodiodes and to machine a metal masked aperture.

  11. Single-Camera Panoramic-Imaging Systems

    NASA Technical Reports Server (NTRS)

    Lindner, Jeffrey L.; Gilbert, John

    2007-01-01

    Panoramic detection systems (PDSs) are developmental video monitoring and image-data processing systems that, as their name indicates, acquire panoramic views. More specifically, a PDS acquires images from an approximately cylindrical field of view that surrounds an observation platform. The main subsystems and components of a basic PDS are a charge-coupled- device (CCD) video camera and lens, transfer optics, a panoramic imaging optic, a mounting cylinder, and an image-data-processing computer. The panoramic imaging optic is what makes it possible for the single video camera to image the complete cylindrical field of view; in order to image the same scene without the benefit of the panoramic imaging optic, it would be necessary to use multiple conventional video cameras, which have relatively narrow fields of view.

  12. Camera placement in integer lattices (extended abstract)

    NASA Astrophysics Data System (ADS)

    Pocchiola, Michel; Kranakis, Evangelos

    1990-09-01

    Techniques for studying an art gallery problem (the camera placement problem) in the infinite lattice L^d of d-tuples of integers are considered. A lattice point A is visible from a camera C positioned at a vertex of L^d if A does not equal C and if the line segment joining A and C crosses no other lattice vertex. By using a combination of probabilistic, combinatorial optimization and algorithmic techniques, the positions the cameras must occupy in the lattice L^d in order to maximize their visibility can be determined in polynomial time, for any given number s ≤ 5^d of cameras. This improves previous results for s ≤ 3^d.
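    The visibility condition in this problem has a simple arithmetic form: a lattice point A is visible from a camera C exactly when the componentwise coordinate differences have greatest common divisor 1. The sketch below counts visible points in a small 2-D window around a camera at the origin (a brute-force illustration, not the polynomial-time placement algorithm of the abstract).

      # Lattice visibility: A is visible from C iff gcd of the coordinate
      # differences is 1 (no intermediate lattice point on the segment).
      from math import gcd
      from functools import reduce
      from itertools import product

      def visible(camera, point):
          diffs = [abs(a - c) for a, c in zip(point, camera)]
          if all(d == 0 for d in diffs):
              return False                   # a point is not visible from itself
          return reduce(gcd, diffs) == 1

      camera = (0, 0)
      pts = [p for p in product(range(-10, 11), repeat=2) if p != camera]
      frac = sum(visible(camera, p) for p in pts) / len(pts)
      print(f"visible fraction in a 21x21 window: {frac:.3f} (6/pi^2 ~ 0.608 asymptotically)")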

  13. Cell phone camera ballistics: attacks and countermeasures

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Liu, Huajian; Fan, Peishuai; Katzenbeisser, Stefan

    2010-01-01

    Multimedia forensics deals with the analysis of multimedia data to gather information on its origin and authenticity. One therefore needs to distinguish between classical criminal forensics (which today also uses multimedia data as evidence) and multimedia forensics, where the actual case is based on a media file. One example of the latter is camera forensics, where pixel error patterns are used as fingerprints identifying a camera as the source of an image. Of course multimedia forensics can become a tool for criminal forensics when evidence used in a criminal investigation is likely to be manipulated. At this point an important question arises: How reliable are these algorithms? Can a judge trust their results? How easy are they to manipulate? In this work we show how camera forensics can be attacked and introduce a potential countermeasure against these attacks.

  14. Ultra-fast framing camera tube

    DOEpatents

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  15. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness that must be closed to safeguard orbital assets and crew, since micrometeoroid and orbital debris (MOD) poses a major risk of damage to the ISS and exploration vehicles. In 2015 this technology was added to NASA's Office of Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of a pair of intensified megapixel telephoto cameras flown to evaluate orbital debris (OD) monitoring in the proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and by using twin cameras we can provide stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  16. Lightweight, Compact, Long Range Camera Design

    NASA Astrophysics Data System (ADS)

    Shafer, Donald V.

    1983-08-01

    The model 700 camera is the latest in a 30-year series of LOROP cameras developed by McDonnell Douglas Astronautics Company (MDAC) and their predecessor companies. The design achieves minimum size and weight and is optimized for low-contrast performance. The optical system includes a 66-inch focal length, f/5.6, apochromatic lens and three folding mirrors imaging on a 4.5-inch square format. A three-axis active stabilization system provides the capability for long exposure time and, hence, fine grain films can be used. The optical path forms a figure "4" behind the lens. In front of the lens is a 45° pointing mirror. This folded configuration contributed greatly to the lightweight and compact design. This sequential autocycle frame camera has three modes of operation with one, two, and three step positions to provide a choice of swath widths within the range of lateral coverage. The magazine/shutter assembly rotates in relationship with the pointing mirror and aircraft drift angle to maintain film format alignment with the flight path. The entire camera is angular rate stabilized in roll, pitch, and yaw. It also employs a lightweight, electro-magnetically damped, low-natural-frequency spring suspension for passive isolation from aircraft vibration inputs. The combined film transport and forward motion compensation (FMC) mechanism, which is operated by a single motor, is contained in a magazine that can, depending on accessibility which is installation dependent, be changed in flight. The design also stresses thermal control, focus control, structural stiffness, and maintainability. The camera is operated from a remote control panel. This paper describes the leading particulars and features of the camera as related to weight and configuration.

  17. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; Macleod, Todd; Gagliano, Larry

    2015-01-01

    On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness that must be closed to safeguard orbital assets and crew, since micrometeoroid and orbital debris (MOD) poses a major risk of damage to the ISS and exploration vehicles. In 2015 this technology was added to NASA's Office of Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of a pair of intensified megapixel telephoto cameras flown to evaluate orbital debris (OD) monitoring in the proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and by using twin cameras we can provide stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  18. Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2001-01-01

    The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam is building on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion system, rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for demonstration on an airbearing table. A pilot-in-the-loop and hardware-in-the-loop simulation to simulate on-orbit navigation and dynamics will complement the airbearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.

  19. Camera-enabled techniques for organic synthesis

    PubMed Central

    Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L

    2013-01-01

    Summary A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future. PMID:23766820

  20. Analysis of Brown camera distortion model

    NASA Astrophysics Data System (ADS)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into the image. It results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality of its decentering distortion component with regard to radius. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of the distortion parameter estimation is evaluated.
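    For reference, the Brown model discussed above maps an ideal normalized image point to its distorted position through radial terms (k1, k2, k3) and decentering terms (p1, p2); a small sketch with arbitrary illustrative coefficients follows.

      # Brown distortion model on normalised image coordinates: radial terms
      # k1..k3 plus decentering (tangential) terms p1, p2. Coefficients are
      # arbitrary illustrations, not calibrated values.
      import numpy as np

      def brown_distort(x, y, k1, k2, k3, p1, p2):
          r2 = x * x + y * y
          radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
          x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
          y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
          return x_d, y_d

      x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
      xd, yd = brown_distort(x, y, k1=-0.12, k2=0.02, k3=0.0, p1=1e-3, p2=-5e-4)
      print("max displacement (normalised units):",
            float(np.max(np.hypot(xd - x, yd - y))))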

  1. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.

  2. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from the standards and papers and, also, novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal of the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updated for the latest mobile phone versions.

  3. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20°C, the CLASP cameras exceeded the low-noise performance requirements (≤ 25 e- read noise and ≤ 10 e-/sec/pixel dark current), in addition to maintaining a stable gain of ≈ 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber, which was configured to operate several tests intended to verify the QE, gain, read noise and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV and X-ray science cameras at MSFC.

  4. Cameras, Computers Help to Decipher Ancient Texts.

    ERIC Educational Resources Information Center

    Coughlin, Ellen K.

    1987-01-01

    Epigrapher and philologist Bruce Zuckerman, directs an archive of photographs and other images of ancient biblical and related texts. By using sophisticated technical photography and computer graphics, he makes his photographs of ancient texts reveal more than a camera alone ever could. (MLW)

  5. Lights, Camera, Read! Arizona Reading Program Manual.

    ERIC Educational Resources Information Center

    Arizona State Dept. of Library, Archives and Public Records, Phoenix.

    This document is the manual for the Arizona Reading Program (ARP) 2003 entitled "Lights, Camera, Read!" This theme spotlights books that were made into movies, and allows readers to appreciate favorite novels and stories that have progressed to the movie screen. The manual consists of eight sections. The Introduction includes welcome letters from…

  6. Autofocus method for scanning remote sensing cameras.

    PubMed

    Lv, Hengyi; Han, Chengshan; Xue, Xucheng; Hu, Changhong; Yao, Cheng

    2015-07-10

    Autofocus methods are conventionally based on capturing the same scene from a series of positions of the focal plane. As a result, it has been difficult to apply this technique to scanning remote sensing cameras, where the scenes change continuously. In order to realize autofocus in scanning remote sensing cameras, a novel autofocus method is investigated in this paper. Instead of introducing additional mechanisms or optics, the overlapped pixels of the adjacent CCD sensors on the focal plane are employed. Two images, corresponding to the same scene on the ground, can be captured at different times. Further, one step of focusing is done during the time interval, so that the two images are obtained at different focal plane positions. Subsequently, the direction of the next focusing step is calculated from the two images. The analysis shows that the method operates without restriction on the time consumption of the algorithm and allows general focus measures and algorithms developed for digital still cameras to be carried over to scanning remote sensing cameras. The experimental results show that the proposed method is applicable to the entire focus measure family; the error ratio is, on average, no more than 0.2% and drops to 0% with reliability improvement, which is lower than that of prevalent approaches (12%). The proposed method is demonstrated to be effective and has potential in other scanning imaging applications.
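    The decision step described above, comparing two images of the same ground scene taken at different focal-plane positions, can be illustrated with any standard focus measure; the sketch below uses gradient energy (Tenengrad) as a stand-in measure and simply steps toward the sharper image, an assumption for illustration rather than the paper's exact measure.

      # Choose the next focusing direction by comparing a focus measure on two
      # overlap-region images taken before and after one focus step.
      import numpy as np

      def gradient_energy(img):
          gy, gx = np.gradient(img.astype(float))
          return float(np.mean(gx**2 + gy**2))

      def next_direction(img_before_step, img_after_step):
          """+1: keep stepping the same way, -1: reverse the focus direction."""
          return 1 if gradient_energy(img_after_step) > gradient_energy(img_before_step) else -1

      # Synthetic demonstration: the second image is a blurred (defocused) copy.
      rng = np.random.default_rng(3)
      sharp = rng.uniform(0, 1, size=(64, 64))
      blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
                 + np.roll(np.roll(sharp, 1, 0), 1, 1)) / 4.0
      print("direction:", next_direction(sharp, blurred))   # -1: it got blurrier, so reverse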

  7. Teaching Camera Calibration by a Constructivist Methodology

    ERIC Educational Resources Information Center

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  8. Camera! Action! Collaborate with Digital Moviemaking

    ERIC Educational Resources Information Center

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  9. Surveillance Cameras in Schools: An Ethical Analysis

    ERIC Educational Resources Information Center

    Warnick, Bryan R.

    2007-01-01

    In this essay, Bryan R. Warnick responds to the increasing use of surveillance cameras in public schools by examining the ethical questions raised by their use. He explores the extent of a student's right to privacy in schools, stipulates how video surveillance is similar to and different from commonly accepted in-person surveillance practices,…

  10. The Sloan Digital Sky Survey Photometric Camera

    SciTech Connect

    Gunn, J.E.; Carr, M.; Rockosi, C.; Sekiguchi, M.; Berry, K.; Elms, B.; de Haas, E.; Ivezic, Z.; Knapp, G.; Lupton, R.; Pauls, G.; Simcoe, R.; Hirsch, R.; Sanford, D.; Wang, S.; York, D.; Harris, F.; Annis, J.; Bartozek, L.; Boroski, W.; Bakken, J.; Haldeman, M.; Kent, S.; Holm, S.; Holmgren, D.; Petravick, D.; Prosapio, A.; Rechenmacher, R.; Doi, M.; Fukugita, M.; Shimasaku, K.; Okada, N.; Hull, C.; Siegmund, W.; Mannery, E.; Blouke, M.; Heidtman, D.; Schneider, D.; Lucinio, R.; and others

    1998-12-01

    We have constructed a large-format mosaic CCD camera for the Sloan Digital Sky Survey. The camera consists of two arrays, a photometric array that uses 30 2048 × 2048 SITe/Tektronix CCDs (24 μm pixels) with an effective imaging area of 720 cm² and an astrometric array that uses 24 400 × 2048 CCDs with the same pixel size, which will allow us to tie bright astrometric standard stars to the objects imaged in the photometric camera. The instrument will be used to carry out photometry essentially simultaneously in five color bands spanning the range accessible to silicon detectors on the ground in the time-delay-and-integrate (TDI) scanning mode. The photometric detectors are arrayed in the focal plane in six columns of five chips each such that two scans cover a filled stripe 2°.5 wide. This paper presents engineering and technical details of the camera. © 1998 The American Astronomical Society.

  11. Camera Systems Rapidly Scan Large Structures

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  12. Video Analysis with a Web Camera

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  13. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24 bit color image) to any storage device installed into the camera, or to external host media via a network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for the stored image/data by typing any image descriptor. The camera optics have an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  14. GAMPIX: A new generation of gamma camera

    NASA Astrophysics Data System (ADS)

    Gmar, M.; Agelou, M.; Carrel, F.; Schoepff, V.

    2011-10-01

    Gamma imaging is a technique of great interest in several fields such as homeland security or decommissioning/dismantling of nuclear facilities in order to localize hot spots of radioactivity. In the nineties, previous works led by CEA LIST resulted in the development of a first generation of gamma camera called CARTOGAM, now commercialized by AREVA CANBERRA. Even if its performances can be adapted to many applications, its weight of 15 kg can be an issue. For several years, CEA LIST has been developing a new generation of gamma camera, called GAMPIX. This system is mainly based on the Medipix2 chip, hybridized to a 1 mm thick CdTe substrate. A coded mask replaces the pinhole collimator in order to increase the sensitivity of the gamma camera. Hence, we obtained a very compact device (global weight less than 1 kg without any shielding), which is easy to handle and to use. In this article, we present the main characteristics of GAMPIX and we expose the first experimental results illustrating the performances of this new generation of gamma camera.

  15. Lightweight Electronic Camera for Research on Clouds

    NASA Technical Reports Server (NTRS)

    Lawson, Paul

    2006-01-01

    "Micro-CPI" (wherein "CPI" signifies "cloud-particle imager") is the name of a small, lightweight electronic camera that has been proposed for use in research on clouds. It would acquire and digitize high-resolution (3- m-pixel) images of ice particles and water drops at a rate up to 1,000 particles (and/or drops) per second.

  16. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory



    COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  17. Camera calibration using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Hui, Nirmal Baran; Pratihar, Dilip Kumar

    2008-12-01

    An autonomous robot will have to detect moving obstacles online before it can plan its collision-free path, while navigating in a dynamic environment. The robot collects information about the environment with the help of a camera and determines the inputs for its motion planner through image analysis. The present article deals with issues related to camera calibration and online image processing. The problem of camera calibration is treated as an optimization problem and solved using a genetic algorithm so as to achieve minimum distorted image plane error. The calibrated vision system is then utilized for the detection and identification of the objects by analysing the images collected at regular intervals. For image processing, five different operations, such as median filtering, thresholding, perimeter estimation, labelling and size filtering, have been carried out. To show the effectiveness of the developed camera-based vision system, inputs of the motion planner of a navigating robot are calculated for two different cases. It is observed that online detection of the shapes and configurations of the obstacles is possible by using the vision system developed.
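    A hedged sketch of treating calibration as an optimization problem follows: a simple real-coded genetic algorithm fits a pinhole model (focal length and principal point only) to synthetic observations by minimizing the image-plane error. The model, GA operators and settings are illustrative and much reduced compared with the article's.

      # Genetic-algorithm calibration sketch: minimise reprojection error of a
      # pinhole model (f, cx, cy) against synthetic observations.
      import numpy as np

      rng = np.random.default_rng(0)
      true_f, true_cx, true_cy = 800.0, 320.0, 240.0
      pts = rng.uniform([-1, -1, 2], [1, 1, 6], size=(60, 3))       # points in the camera frame
      uv = np.stack([true_f * pts[:, 0] / pts[:, 2] + true_cx,
                     true_f * pts[:, 1] / pts[:, 2] + true_cy], axis=1)
      uv += rng.normal(scale=0.3, size=uv.shape)                    # measurement noise

      def error(p):
          f, cx, cy = p
          proj = np.stack([f * pts[:, 0] / pts[:, 2] + cx,
                           f * pts[:, 1] / pts[:, 2] + cy], axis=1)
          return float(np.mean(np.sum((proj - uv) ** 2, axis=1)))

      low, high = np.array([100.0, 0.0, 0.0]), np.array([2000.0, 640.0, 480.0])
      pop = rng.uniform(low, high, size=(60, 3))

      def tournament(pop, fit):
          i, j = rng.integers(0, len(pop), size=2)
          return pop[i] if fit[i] < fit[j] else pop[j]

      for gen in range(150):
          fit = np.array([error(p) for p in pop])
          new = [pop[fit.argmin()].copy()]                  # elitism: keep the best
          while len(new) < len(pop):
              a, b = tournament(pop, fit), tournament(pop, fit)
              child = a + rng.uniform() * (b - a)           # blend crossover
              child = child + rng.normal(scale=(high - low) * 0.1 / (1 + gen))
              new.append(np.clip(child, low, high))
          pop = np.array(new)

      best = pop[np.argmin([error(p) for p in pop])]
      print("estimated (f, cx, cy):", np.round(best, 1), " true:", (true_f, true_cx, true_cy))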

  18. EOD Facilities Manual. Camera Calibration Laboratory Capabilities

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The tests and equipment are described for measuring the exact performance characteristics of camera systems for earth resources, space, and other applications. The tests discussed include: modulation transfer function, field irradiance, veiling glare, T-number tests, shutter speed, spectral transmission, and focal length.

  19. Development of a multispectral camera system

    NASA Astrophysics Data System (ADS)

    Sugiura, Hiroaki; Kuno, Tetsuya; Watanabe, Norihiro; Matoba, Narihiro; Hayashi, Junichiro; Miyake, Yoichi

    2000-05-01

    A highly accurate multispectral camera and its application software have been developed as a practical system to capture digital images of the artworks stored in galleries and museums. Instead of recording color data in the conventional three RGB primary colors, the newly developed camera and software carry out a pixel-wise estimation of spectral reflectance, the color data specific to the object, to enable practical multispectral imaging. In order to realize accurate multispectral imaging, the dynamic range of the camera is set to 14 bits or more and the output to 14 bits, so as to allow capturing even when the difference in light quantity between the channels is large. Further, a small rotary color filter was simultaneously developed to keep the camera to a practical size. We have also developed software capable of selecting the optimum combination of color filters available on the market. Using this software, n types of color filter can be selected from m types of color filter so as to give a minimum Euclidean distance, or minimum color difference in CIELAB color space, between the actual and estimated spectral reflectances of 147 types of oil paint samples.
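    The pixel-wise reflectance estimation described above can be illustrated with a standard linear estimator: model the camera responses as c = S r, where S stacks the effective spectral sensitivities of the n channels, and recover r with a regularized pseudo-inverse. The sensitivities, reflectance and regularization below are synthetic illustrations, not the system's calibrated data or its actual estimator.

      # Regularised linear estimate of spectral reflectance from multi-channel
      # camera responses (synthetic sensitivities and reflectance).
      import numpy as np

      rng = np.random.default_rng(5)
      wavelengths = np.arange(400, 701, 10)          # 31 samples across the visible range
      n_channels = 6

      centres = np.linspace(420, 680, n_channels)    # Gaussian channel sensitivities
      S = np.exp(-0.5 * ((wavelengths[None, :] - centres[:, None]) / 35.0) ** 2)

      r_true = 0.4 + 0.3 * np.sin(wavelengths / 60.0) + 0.2 * np.cos(wavelengths / 95.0)
      c = S @ r_true + rng.normal(scale=1e-3, size=n_channels)

      lam = 1e-2                                      # ridge regularisation weight
      r_est = np.linalg.solve(S.T @ S + lam * np.eye(len(wavelengths)), S.T @ c)
      print(f"RMS reflectance error: {np.sqrt(np.mean((r_est - r_true) ** 2)):.3f}")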

  20. Optical Design of the LSST Camera

    SciTech Connect

    Olivier, S S; Seppala, L; Gilmore, K

    2008-07-16

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, modified Paul-Baker design, with an 8.4-meter primary mirror, a 3.4-m secondary, and a 5.0-m tertiary feeding a camera system that includes a set of broad-band filters and refractive corrector lenses to produce a flat focal plane with a field of view of 9.6 square degrees. Optical design of the camera lenses and filters is integrated with optical design of telescope mirrors to optimize performance, resulting in excellent image quality over the entire field from ultra-violet to near infra-red wavelengths. The LSST camera optics design consists of three refractive lenses with clear aperture diameters of 1.55 m, 1.10 m and 0.69 m and six interchangeable, broad-band, filters with clear aperture diameters of 0.75 m. We describe the methodology for fabricating, coating, mounting and testing these lenses and filters, and we present the results of detailed tolerance analyses, demonstrating that the camera optics will perform to the specifications required to meet their performance goals.

  1. Camera calibration based on parallel lines

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Zhang, Yuhai; Zhao, Yu

    2015-01-01

    Nowadays, computer vision is widely used in our daily life. In order to obtain reliable information, camera calibration cannot be neglected. Traditional camera calibration is often impractical because accurate coordinates of the reference control points cannot be obtained. In this article, we present a camera calibration algorithm which can determine the intrinsic parameters together with the extrinsic parameters. The algorithm is based on parallel lines, which are commonly found in everyday photos, so both the intrinsic and the extrinsic parameters can be recovered from information extracted from ordinary photographs. In more detail, we use two pairs of parallel lines to compute the vanishing points; in particular, if these pairs of parallel lines are perpendicular, the two vanishing points are conjugate with respect to the image of the absolute conic (IAC), and several views (at least 5) can be used to determine the IAC. The intrinsic parameters are then easily obtained by a Cholesky factorization of the IAC matrix. Since the line connecting a vanishing point with the camera optical center is parallel to the original lines in the scene plane, the extrinsic parameters R and T can also be recovered. Both the simulation and the experimental results meet our expectations.
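    The geometric construction described above can be sketched directly in homogeneous coordinates: the image of each 3D line is the cross product of two projected points, a vanishing point is the cross product of the images of two parallel lines, and two vanishing points of perpendicular directions satisfy v1' * omega * v2 = 0, with omega the image of the absolute conic. The camera matrix and directions below are arbitrary synthetic examples.

      # Vanishing points from parallel lines and the IAC orthogonality check.
      import numpy as np

      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])

      def project(X):
          x = K @ X
          return x / x[2]

      def vanishing_point(d, offsets):
          """Intersect the images of two parallel 3D lines with direction d."""
          lines = []
          for o in offsets:                       # two distinct lines, same direction
              p1, p2 = project(o + 2.0 * d), project(o + 5.0 * d)
              lines.append(np.cross(p1, p2))      # image line through the two points
          v = np.cross(lines[0], lines[1])        # line intersection = vanishing point
          return v / v[2]

      d1 = np.array([1.0, 0.0, 0.2])
      d2 = np.array([-0.2, 1.0, 1.0])             # chosen orthogonal to d1
      offsets = [np.array([0.0, 0.0, 4.0]), np.array([1.0, 1.0, 5.0])]

      v1, v2 = vanishing_point(d1, offsets), vanishing_point(d2, offsets)
      omega = np.linalg.inv(K).T @ np.linalg.inv(K)   # image of the absolute conic
      print("v1:", np.round(v1, 1), " v2:", np.round(v2, 1))
      print("orthogonality residual v1' omega v2:", float(v1 @ omega @ v2))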

  2. Digital Camera Project Fosters Communication Skills

    ERIC Educational Resources Information Center

    Fisher, Ashley; Lazaros, Edward J.

    2009-01-01

    This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…

  3. Shadowgraph illumination techniques for framing cameras

    SciTech Connect

    Malone, R.M.; Flurer, R.L.; Frogget, B.C.; Sorenson, D.S.; Holmes, V.H.; Obst, A.W.

    1997-06-01

    Many pulse power applications in use at the Pegasus facility at the Los Alamos National Laboratory require specialized imaging techniques. Due to the short event duration times, visible images are recorded by high speed electronic framing cameras. Framing cameras provide the advantages of high speed movies of back light experiments. These high speed framing cameras require bright illumination sources to record images with 10 ns integration times. High power lasers offer sufficient light for back illuminating the target assemblies; however, laser speckle noise lowers the contrast in the image. Laser speckle noise also limits the effective resolution. This discussion focuses on the use of telescopes to collect images 50 feet away. Both light field and dark field illumination techniques are compared. By adding relay lenses between the assembly target and the telescope, a high resolution magnified image can be recorded. For dark field illumination, these relay lenses can be used to separate the object field from the illumination laser. The illumination laser can be made to focus onto the opaque secondary of a Schmidt telescope. Thus, the telescope only collects scattered light from the target assembly. This dark field illumination eliminates the laser speckle noise and allows high resolution images to be recorded. Using the secondary of the telescope to block the illumination laser makes dark field illumination an ideal choice for the framing camera.

  4. The Legal Implications of Surveillance Cameras

    ERIC Educational Resources Information Center

    Steketee, Amy M.

    2012-01-01

    The nature of school security has changed dramatically over the last decade. Schools employ various measures, from metal detectors to identification badges to drug testing, to promote the safety and security of staff and students. One of the increasingly prevalent measures is the use of security cameras. In fact, the U.S. Department of Education…

  5. Digital Camera Control for Faster Inspection

    NASA Technical Reports Server (NTRS)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low- resolution images to quickly spot problem areas and can then cause a rapid transition to high- resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  6. Ultraviolet Viewing with a Television Camera.

    ERIC Educational Resources Information Center

    Eisner, Thomas; And Others

    1988-01-01

    Reports on a portable video color camera that is fully suited for seeing ultraviolet images and offers some expanded viewing possibilities. Discusses the basic technique, specialized viewing, and the instructional value of this system of viewing reflectance patterns of flowers and insects that are invisible to the unaided eye. (CW)

  7. Camera for Quasars in Early Universe (CQUEAN)

    NASA Astrophysics Data System (ADS)

    Park, Won-Kee; Pak, Soojong; Im, Myungshin; Choi, Changsu; Jeon, Yiseul; Chang, Seunghyuk; Jeong, Hyeonju; Lim, Juhee; Kim, Eunbin

    2012-08-01

    We describe the overall characteristics and the performance of an optical CCD camera system, Camera for Quasars in Early Universe (CQUEAN), which has been used at the 2.1 m Otto Struve Telescope of the McDonald Observatory since 2010 August. CQUEAN was developed for follow-up imaging observations of red sources such as high-redshift quasar candidates (z ≳ 5), gamma-ray bursts, brown dwarfs, and young stellar objects. For efficient observations of the red objects, CQUEAN has a science camera with a deep-depletion CCD chip, which boasts a higher quantum efficiency at 0.7-1.1 μm than conventional CCD chips. The camera was developed in a short timescale (~1 yr) and has been working reliably. By employing an autoguiding system and a focal reducer to enhance the field of view on the classical Cassegrain focus, we achieve stable guiding in 20 minute exposures, image quality with FWHM ≥ 0.6″ over the whole field (4.8' × 4.8'), and a limiting magnitude of z = 23.4 AB mag at 5-σ with 1 hr total integration time. This article includes data taken at the McDonald Observatory of The University of Texas at Austin.

  8. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL.... This generic type of device may include signal analysis and display equipment, patient and...

  9. Optimum design of uncooled staring infrared camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Pan, Debin; Liu, Aidong; Geng, Anbing; Li, Yong; He, Jun

    2006-02-01

    Several models for predicting the target acquisition range of the uncooled staring camera, and their advantages, are presented in this paper. NVTherm is used to evaluate the modulation transfer function, minimum resolvable temperature difference and target acquisition range. The analysis shows that the performance of the detector is the key factor limiting the performance of the uncooled staring camera. The target acquisition range of the uncooled infrared camera can be improved by increasing the effective focal length (EFL) of the optics, decreasing its F/#, or reducing the pixel pitch of the detector. A detection range of 1.09 km can be achieved with a 75 mm EFL and F/0.8. When the EFL is increased from 75 mm to 150 mm at F/0.8 and 45 μm pixel pitch, a detection range of 2.36 km, a recognition range of 0.47 km and an identification range of 0.24 km are obtained. When the pixel pitch is reduced to 35 μm, the detection range is 2.59 km. Furthermore, when 2 × 2 microscan is adopted in the camera design, the effective pixel pitch changes from 35 μm to 17.5 μm. Although the infrared camera then becomes an optics-limited system, its performance improves considerably, giving a detection range of 2.94 km. A field test shows that the detection range to a 1.7 m x 0.45 m target is 2.2 km with F/0.8, 150 mm EFL and 45 μm pixel pitch, in good agreement with the NVTherm prediction of 2.36 km. An optimum uncooled infrared design is thus achieved using the NVTherm software, which shortens the design cycle.
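
    As a rough illustration of the scaling behaviour described above (range improves linearly with EFL and inversely with pixel pitch), the following sketch computes detector-sampling-limited acquisition ranges from Johnson-style cycle criteria. All parameter values are assumptions chosen for illustration; the sketch ignores MTF, sensitivity and atmospheric effects, so it will not reproduce the NVTherm predictions quoted above.

    ```python
    # Illustrative sketch (not NVTherm): detector-sampling-limited acquisition
    # ranges from Johnson-style cycle criteria. Target size, cycle counts and
    # optics values below are assumptions chosen only to show the scaling with
    # EFL and pixel pitch.

    def acquisition_range_km(target_size_m, efl_mm, pixel_pitch_um, cycles_required):
        """Range at which the required number of bar cycles fits across the target,
        assuming one resolvable cycle needs two detector pixels (Nyquist)."""
        ifov_rad = (pixel_pitch_um * 1e-6) / (efl_mm * 1e-3)   # angle subtended by one pixel
        cycle_angle_rad = 2.0 * ifov_rad                        # angle per resolvable cycle
        return target_size_m / (cycles_required * cycle_angle_rad) / 1000.0

    target_m = 0.45                                             # critical target dimension (assumed)
    criteria = {"detection": 1.0, "recognition": 4.0, "identification": 8.0}

    for efl_mm, pitch_um in [(75, 45), (150, 45), (150, 35)]:
        ranges = {k: acquisition_range_km(target_m, efl_mm, pitch_um, n) for k, n in criteria.items()}
        summary = ", ".join(f"{k} {v:.2f} km" for k, v in ranges.items())
        print(f"EFL {efl_mm} mm, pitch {pitch_um} um: {summary}")
    ```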

  10. Measuring rainfall with low-cost cameras

    NASA Astrophysics Data System (ADS)

    Allamano, Paola; Cavagnero, Paolo; Croci, Alberto; Laio, Francesco

    2016-04-01

    In Allamano et al. (2015), we propose to retrieve quantitative measures of rainfall intensity by relying on the acquisition and analysis of images captured from professional cameras (SmartRAIN technique in the following). SmartRAIN is based on the fundamentals of camera optics and exploits the intensity changes due to drop passages in a picture. The main steps of the method include: i) drop detection, ii) blur effect removal, iii) estimation of drop velocities, iv) drop positioning in the control volume, and v) rain rate estimation. The method has been applied to real rain events with errors of the order of ±20%. This work aims to bridge the gap between the need of acquiring images via professional cameras and the possibility of exporting the technique to low-cost webcams. We apply the image processing algorithm to frames registered with low-cost cameras both in the lab (i.e., controlled rain intensity) and field conditions. The resulting images are characterized by lower resolutions and significant distortions with respect to professional camera pictures, and are acquired with fixed aperture and a rolling shutter. All these hardware limitations indeed exert relevant effects on the readability of the resulting images, and may affect the quality of the rainfall estimate. We demonstrate that a proper knowledge of the image acquisition hardware allows one to fully explain the artefacts and distortions due to the hardware. We demonstrate that, by correcting these effects before applying the image processing algorithm, quantitative rain intensity measures are obtainable with a good accuracy also with low-cost modules.
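
    A minimal sketch of the detection and rate-estimation steps listed above (drop detection by differencing against a background frame, followed by a volume-flux rain-rate estimate) is given below. Blur removal, drop velocity estimation and 3D positioning are omitted, and all thresholds, drop volumes and the control-volume size are assumed values rather than those of the SmartRAIN technique.

    ```python
    # Minimal sketch of a camera-based rain-rate estimate in the spirit of the
    # steps listed above (drop detection + rate estimation). Blur removal, drop
    # velocity estimation and 3D positioning are omitted; all thresholds and the
    # control-volume size are assumed values for illustration.
    import numpy as np

    def detect_drops(frame, background, threshold=15):
        """Return a boolean mask of pixels brightened by passing drops."""
        diff = frame.astype(float) - background.astype(float)
        return diff > threshold

    def rain_rate_mm_per_h(drop_masks, mean_drop_volume_mm3, control_area_m2, frame_dt_s,
                           pixels_per_drop=20):
        """Very rough rate: count drop-sized blobs per frame, convert volume flux to depth."""
        drops_per_frame = [mask.sum() / pixels_per_drop for mask in drop_masks]
        drops_per_s = np.mean(drops_per_frame) / frame_dt_s
        volume_mm3_per_s = drops_per_s * mean_drop_volume_mm3
        area_mm2 = control_area_m2 * 1e6
        return volume_mm3_per_s / area_mm2 * 3600.0   # mm of water depth per hour

    # Synthetic example: 100 frames, each with a sprinkling of bright drop pixels.
    rng = np.random.default_rng(0)
    background = np.full((240, 320), 100, dtype=np.uint8)
    frames = [np.clip(background + (rng.random((240, 320)) < 0.001) * 40, 0, 255) for _ in range(100)]
    masks = [detect_drops(f, background) for f in frames]
    print(f"estimated rain rate: {rain_rate_mm_per_h(masks, 4.0, 0.05, 1/30):.1f} mm/h")
    ```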

  11. A novel fully integrated handheld gamma camera

    NASA Astrophysics Data System (ADS)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine the gamma-ray detector, the display and the embedded computing system in a single device. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery-operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. The device is designed for radioguided surgery and small-organ imaging, but it could easily be integrated into surgical navigation systems.

  12. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements taken from the panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being drafted to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded over a short period of time is a candidate for economical and flexible off-site measurement. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and offers promising features for mobile mapping systems.

  13. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  14. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately equal to 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to operate several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  15. National Guidelines for Digital Camera Systems Certification

    NASA Astrophysics Data System (ADS)

    Yaron, Yaron; Keinan, Eran; Benhamu, Moshe; Regev, Ronen; Zalmanzon, Garry

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capturing. Today, we see a proliferation of imaging sensors collecting photographs in different ground resolutions, spectral bands, swath sizes, radiometric characteristics, accuracies and carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanning and powerful processing tools to obtain high quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product including: maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of details (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes should be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves for the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points and process). The study examines all aspects of the final product including its accuracy, the product pixel size

  16. Method for out-of-focus camera calibration.

    PubMed

    Bell, Tyler; Xu, Jing; Zhang, Song

    2016-03-20

    State-of-the-art camera calibration methods assume that the camera is at least nearly in focus and thus fail if the camera is substantially defocused. This paper presents a method which enables the accurate calibration of an out-of-focus camera. Specifically, the proposed method uses a digital display (e.g., liquid crystal display monitor) to generate fringe patterns that encode feature points into the carrier phase; these feature points can be accurately recovered, even if the fringe patterns are substantially blurred (i.e., the camera is substantially defocused). Experiments demonstrated that the proposed method can accurately calibrate a camera regardless of the amount of defocusing: the focal length difference is approximately 0.2% when the camera is focused compared to when the camera is substantially defocused.
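
    The following sketch illustrates the core idea under stated assumptions: a position is encoded in the carrier phase of display-generated fringes, and a standard 4-step phase-shifting formula recovers that phase even after heavy Gaussian blur standing in for defocus. The fringe period and blur width are assumed values; this is a sketch of the principle, not the authors' implementation.

    ```python
    # Sketch of the core idea: a feature's horizontal position is encoded in the
    # carrier phase of sinusoidal fringes shown on a display, and the phase can
    # still be recovered after strong defocus blur. Standard 4-step phase shifting;
    # the fringe period and blur width are assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    width, period = 640, 32                      # pixels per fringe period (assumed)
    x = np.arange(width)
    carrier = 2 * np.pi * x / period             # carrier phase encodes horizontal position

    # Four patterns shifted by 90 degrees each, then blurred to mimic a
    # severely defocused camera.
    shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
    patterns = [0.5 + 0.5 * np.cos(carrier + s) for s in shifts]
    blurred = [gaussian_filter(p, sigma=15) for p in patterns]   # strong defocus blur

    # Wrapped phase from the 4-step formula: phi = atan2(I4 - I2, I1 - I3).
    I1, I2, I3, I4 = blurred
    wrapped = np.arctan2(I4 - I2, I1 - I3)

    # The recovered wrapped phase still tracks the encoded carrier phase
    # (borders are excluded because of filter boundary effects).
    err = np.angle(np.exp(1j * (wrapped - carrier)))
    print("max wrapped-phase error away from the borders (rad):", np.abs(err[80:-80]).max())
    ```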

  17. Controlling TV-Camera f-Stop Remotely

    NASA Technical Reports Server (NTRS)

    Talley, G. L., Jr.; Herbison, D. R.; Routh, G. F.

    1984-01-01

    Lens opening of television camera controlled manually from remote location by simple and inexpensive data link without modifications to camera lens system. Allows close-up views of wide-brightness-range events otherwise hazardous for a human operator.

  18. 5. VAL CAMERA CAR, DETAIL OF HOIST AT SIDE OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. VAL CAMERA CAR, DETAIL OF HOIST AT SIDE OF BRIDGE AND ENGINE CAR ON TRACKS, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  19. Raster linearity of video cameras calibrated with precision tester

    NASA Technical Reports Server (NTRS)

    1964-01-01

    The time between transitions in the video output of a camera is measured when registered at reticle marks on the vidicon faceplate. This device permits precision calibration of raster linearity of television camera tubes.

  20. Exterior view to the southeast of the west camera bunker ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Exterior view to the southeast of the west camera bunker outside the fenced facility area - Nevada Test Site, Test Cell C Facility, West Camera Bunker, Area 25, Jackass Flats, Road J, Mercury, Nye County, NV

  1. Cameras on the moon with Apollos 15 and 16.

    NASA Technical Reports Server (NTRS)

    Page, T.

    1972-01-01

    Description of the cameras used for photography and television by Apollo 15 and 16 missions, covering a hand-held Hasselblad camera for black and white panoramic views at locations visited by the astronauts, a special stereoscopic camera designed by astronomer Tom Gold, a 16-mm movie camera used on the Apollo 15 and 16 Rovers, and several TV cameras. Details are given on the far-UV camera/spectrograph of the Apollo 16 mission. An electronographic camera converts UV light to electrons which are ejected by a KBr layer at the focus of an f/1 Schmidt camera and darken photographic films much more efficiently than far-UV. The astronomical activity of the Apollo 16 astronauts on the moon, using this equipment, is discussed.

  2. World's fastest and most sensitive astronomical camera

    NASA Astrophysics Data System (ADS)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). ESO PR Photo 22a/09 The CCD220 detector ESO PR Photo 22b/09 The OCam camera ESO PR Video 22a/09 OCam images "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  3. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach. Both model-based methods show significant advantages compared to the curve-fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused and thus finding stereo correspondences is enhanced
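
    The Kalman-like per-pixel update described above can be illustrated by the standard fusion of two Gaussian depth estimates with known variances (inverse-variance weighting). This is a sketch of the merging step only; the numbers are made up and the authors' full pipeline is not reproduced.

    ```python
    # Sketch of a Kalman-like per-pixel depth update: two independent estimates of
    # the same (virtual) depth, each with a variance, are fused by inverse-variance
    # weighting. Illustrates the merging step described above, not the full pipeline.

    def fuse(depth_a, var_a, depth_b, var_b):
        """Fuse two Gaussian depth estimates; returns (fused depth, fused variance)."""
        k = var_a / (var_a + var_b)          # gain applied when updating estimate A with B
        depth = depth_a + k * (depth_b - depth_a)
        var = (1.0 - k) * var_a
        return depth, var

    # Example: an uncertain initial estimate updated by a sharper one from another micro-image.
    d, v = 2.30, 0.40
    d, v = fuse(d, v, 2.10, 0.10)
    print(f"fused depth = {d:.3f}, variance = {v:.3f}")   # moves toward 2.10, variance drops below 0.10
    ```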

  4. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
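
    As a sketch of how an event position can be reported in Cartesian coordinates from correlated observations across cameras, the example below intersects 2D bearing rays from two cameras with known positions using least squares. The camera positions and bearings are made-up values; the actual system's correlation logic is more involved.

    ```python
    # Sketch of localizing an event in Cartesian coordinates from two cameras with
    # known positions and measured bearings to the event (2D least-squares ray
    # intersection). Camera positions and angles are made-up example values.
    import numpy as np

    def locate_event(cam_positions, bearings_rad):
        """Least-squares intersection of 2D rays (position p_i, unit direction d_i)."""
        A, b = [], []
        for p, theta in zip(cam_positions, bearings_rad):
            d = np.array([np.cos(theta), np.sin(theta)])
            n = np.array([-d[1], d[0]])              # normal to the ray direction
            A.append(n)                              # constraint: n . x = n . p
            b.append(n @ np.asarray(p, dtype=float))
        A, b = np.vstack(A), np.array(b)
        return np.linalg.lstsq(A, b, rcond=None)[0]

    cams = [(0.0, 0.0), (10.0, 0.0)]
    true_event = np.array([4.0, 3.0])
    bearings = [np.arctan2(true_event[1] - y, true_event[0] - x) for x, y in cams]
    print("estimated event position:", locate_event(cams, bearings))   # ~ [4, 3]
    ```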

  5. Slit-Drum Camera For Projectile Studies

    NASA Astrophysics Data System (ADS)

    Liangyi, Chen; Shaoxiang, Zhou; Guanhua, Cha; Yuxi, Hu

    1983-03-01

    The model XF-70 slit-drum camera has been developed to record projectiles in flight for observation and data acquisition. It has two operation modes: (1) synchro-ballistic photography, and (2) streak recording. The film is located on the inner surface of a rotating drum, which transports it. A folding mirror is arranged to reflect the light beam 90 degrees onto the film. The assembly of folding mirror and slit aperture can be rotated together about the optical axis of the objective, so the camera can record a projectile at any launching angle, in either synchro-ballistic photography or streak recording, by pre-rotating the folding mirror assembly by an appropriate angle. The mechanical-electric shutter, which prevents the film from being re-exposed, is close to the slit aperture. The loading mechanism is designed for use in daylight. LED fiducial and timing marks are printed at the edges of the frame for accurate measurements.

  6. First Polarised Light with the NIKA Camera

    NASA Astrophysics Data System (ADS)

    Ritacco, A.; Adam, R.; Adane, A.; Ade, P.; André, P.; Beelen, A.; Belier, B.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; D'Addabbo, A.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Leclercq, S.; Macías-Pérez, J. F.; Martino, J.; Mauskopf, P.; Maury, A.; Mayet, F.; Monfardini, A.; Pajot, F.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Rebolo-Iglesias, M.; Revéret, V.; Rodriguez, L.; Savini, G.; Schuster, K.; Sievers, A.; Thum, C.; Triqueneaux, S.; Tucker, C.; Zylka, R.

    2016-08-01

    NIKA is a dual-band camera operating with 315 frequency-multiplexed LEKIDs cooled at 100 mK. NIKA is designed to observe the sky in intensity and polarisation at 150 and 260 GHz from the IRAM 30-m telescope. It is a test-bench for the final NIKA2 camera. The incoming linear polarisation is modulated at four times the mechanical rotation frequency by a warm rotating multi-layer half-wave plate. Then, the signal is analyzed by a wire grid and finally absorbed by the lumped element kinetic inductance detectors (LEKIDs). The small time constant (<1 ms) of the LEKIDs combined with the modulation of the HWP enables the quasi-simultaneous measurement of the three Stokes parameters I, Q, U, representing linear polarisation. In this paper, we present the results of recent observational campaigns demonstrating the good performance of NIKA in detecting polarisation at millimeter wavelength.

  7. SLAM using camera and IMU sensors.

    SciTech Connect

    Rothganger, Fredrick H.; Muguira, Maritza M.

    2007-01-01

    Visual simultaneous localization and mapping (VSLAM) is the problem of using video input to reconstruct the 3D world and the path of the camera in an 'on-line' manner. Since the data is processed in real time, one does not have access to all of the data at once. (Contrast this with structure from motion (SFM), which is usually formulated as an 'off-line' process on all the data seen, and is not time dependent.) A VSLAM solution is useful for mobile robot navigation or as an assistant for humans exploring an unknown environment. This report documents the design and implementation of a VSLAM system that consists of a small inertial measurement unit (IMU) and camera. The approach is based on a modified Extended Kalman Filter. This research was performed under a Laboratory Directed Research and Development (LDRD) effort.
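
    A generic (linear) Kalman filter sketch in the spirit of this approach is shown below: the state is propagated with an IMU-style acceleration input and corrected with a camera-derived position measurement. The state vector, models and noise values are illustrative assumptions and do not reproduce the modified Extended Kalman Filter described in the report.

    ```python
    # Generic Kalman filter sketch: predict with an IMU-style acceleration input,
    # update with a camera-derived position measurement. State, models and noise
    # values are illustrative assumptions, not the report's modified EKF.
    import numpy as np

    dt = 0.01
    F = np.array([[1, dt], [0, 1]])          # state: [position, velocity]
    B = np.array([[0.5 * dt**2], [dt]])      # acceleration input matrix
    H = np.array([[1.0, 0.0]])               # camera measures position only
    Q = 1e-4 * np.eye(2)                     # process noise (assumed)
    R = np.array([[1e-2]])                   # measurement noise (assumed)

    x = np.zeros((2, 1))                     # state estimate
    P = np.eye(2)                            # state covariance

    def predict(x, P, accel):
        x = F @ x + B * accel
        P = F @ P @ F.T + Q
        return x, P

    def update(x, P, z):
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        return x + K @ y, (np.eye(2) - K @ H) @ P

    # Simulate 100 IMU steps with one camera fix every 10 steps.
    rng = np.random.default_rng(1)
    true_pos, true_vel = 0.0, 1.0
    for k in range(100):
        true_pos += true_vel * dt
        x, P = predict(x, P, accel=0.0)
        if k % 10 == 9:
            z = np.array([[true_pos + 0.1 * rng.standard_normal()]])
            x, P = update(x, P, z)
    print(f"estimated position {x[0, 0]:.2f} (true {true_pos:.2f})")
    ```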

  8. A filter spectrometer concept for facsimile cameras

    NASA Technical Reports Server (NTRS)

    Jobson, D. J.; Kelly, W. L., IV; Wall, S. D.

    1974-01-01

    A concept which utilizes interference filters and photodetector arrays to integrate spectrometry with the basic imagery function of a facsimile camera is described and analyzed. The analysis considers spectral resolution, instantaneous field of view, spectral range, and signal-to-noise ratio. Specific performance predictions for the Martian environment, the Viking facsimile camera design parameters, and a signal-to-noise ratio for each spectral band equal to or greater than 256 indicate the feasibility of obtaining a spectral resolution of 0.01 micrometers with an instantaneous field of view of about 0.1 deg in the 0.425 micrometers to 1.025 micrometers range using silicon photodetectors. A spectral resolution of 0.05 micrometers with an instantaneous field of view of about 0.6 deg in the 1.0 to 2.7 micrometers range using lead sulfide photodetectors is also feasible.

  9. Blind identification of cellular phone cameras

    NASA Astrophysics Data System (ADS)

    Çeliktutan, Oya; Avcibas, Ismail; Sankur, Bülent

    2007-02-01

    In this paper, we focus on the blind source cell-phone identification problem. It is known that various artifacts in the image processing pipeline, such as pixel defects or unevenness of the responses in the CCD sensor, dark current noise, and the proprietary interpolation algorithms involved in the color filter array (CFA), leave telltale footprints. These artifacts, although often imperceptible, are statistically stable and can be considered a signature of the camera type or even of the individual device. For this purpose, we explore a set of forensic features, such as binary similarity measures, image quality measures and higher-order wavelet statistics, in conjunction with an SVM classifier to identify the originating cell-phone type. We provide identification results among cell-phone cameras of 9 different brands. In addition to our initial results, we applied a set of geometrical operations to the original images in order to investigate how robust our proposed method is under these manipulations.
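
    A sketch of the feature-plus-SVM classification idea, using scikit-learn, appears below. The forensic features of the paper (binary similarity measures, image quality measures, wavelet statistics) are replaced here by simple placeholder statistics, and the training data is synthetic.

    ```python
    # Sketch of source-camera classification: extract simple per-image statistics
    # (stand-ins for the binary similarity / image quality / wavelet features used
    # in the paper) and train an SVM to predict the camera model. Data is synthetic;
    # the feature choice is a placeholder assumption.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def simple_features(img):
        """Placeholder forensic features: noise-residual statistics per image."""
        residual = img - np.median(img)
        return [residual.mean(), residual.std(),
                np.abs(np.diff(img, axis=0)).mean(),
                np.abs(np.diff(img, axis=1)).mean()]

    # Synthetic "cameras": each adds a slightly different noise signature.
    rng = np.random.default_rng(0)
    X, y = [], []
    for label, noise_std in enumerate([2.0, 4.0, 6.0]):
        for _ in range(60):
            img = 128 + rng.normal(0, noise_std, size=(64, 64))
            X.append(simple_features(img))
            y.append(label)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```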

  10. Time-of-Flight Microwave Camera.

    PubMed

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-05

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
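
    The two resolution figures quoted above can be checked with standard FMCW relations: depth resolution is c/(2B) for a swept bandwidth B, and 200 ps of travel time corresponds to about 6 cm of free-space path. The short calculation below only reproduces these textbook relations with the bandwidth and time values from the abstract.

    ```python
    # Quick check of the numbers quoted above: FMCW depth resolution from the swept
    # bandwidth, and the free-space path corresponding to the 200 ps time resolution.
    c = 299_792_458.0                 # speed of light, m/s

    bandwidth = 12e9 - 8e9            # X-band sweep, 8-12 GHz
    depth_resolution = c / (2 * bandwidth)
    print(f"FMCW depth resolution: {depth_resolution * 100:.1f} cm")                 # ~3.7 cm

    time_resolution = 200e-12         # 200 ps
    path = c * time_resolution
    print(f"200 ps of travel time corresponds to {path * 100:.1f} cm of free-space path")  # ~6 cm
    ```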

  11. Aquatic Debris Detection Using Embedded Camera Sensors

    PubMed Central

    Wang, Yong; Wang, Dianhong; Lu, Qian; Luo, Dapeng; Fang, Wu

    2015-01-01

    Aquatic debris monitoring is of great importance to human health, aquatic habitats and water transport. In this paper, we first introduce the prototype of an aquatic sensor node equipped with an embedded camera sensor. Based on this sensing platform, we propose a fast and accurate debris detection algorithm. Our method is specifically designed based on compressive sensing theory to give full consideration to the unique challenges in aquatic environments, such as waves, swaying reflections, and tight energy budget. To upload debris images, we use an efficient sparse recovery algorithm in which only a few linear measurements need to be transmitted for image reconstruction. Besides, we implement the host software and test the debris detection algorithm on realistically deployed aquatic sensor nodes. The experimental results demonstrate that our approach is reliable and feasible for debris detection using camera sensors in aquatic environments. PMID:25647741

  12. An unmanned watching system using video cameras

    SciTech Connect

    Kaneda, K.; Nakamae, E.; Takahashi, E.; Yazawa, K.

    1990-04-01

    Techniques for detecting intruders at a remote location, such as a power plant or substation, or in an unmanned building at night, are significant in the field of unmanned watching systems. This article describes an unmanned watching system to detect trespassers in real time, applicable both indoors and outdoors, based on image processing. The main part of the proposed system consists of a video camera, an image processor and a microprocessor. Images are input from the video camera to the image processor every 1/60 second, and objects which enter the image are detected by measuring changes of intensity level in selected sensor areas. This article discusses the system configuration and the detection method. Experimental results under a range of environmental conditions are given.
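
    A minimal sketch of the detection method described above, monitoring intensity changes in selected sensor areas between successive frames, is shown below. The frames are synthetic, and the threshold and sensor-area coordinates are assumed values.

    ```python
    # Sketch of intensity-change intrusion detection in selected "sensor areas" of
    # the frame, as described above. Frames are synthetic; the threshold and
    # sensor-area coordinates are assumed values.
    import numpy as np

    SENSOR_AREAS = [(slice(100, 140), slice(50, 90)),    # (rows, cols) of watched regions
                    (slice(20, 60), slice(200, 240))]
    THRESHOLD = 12.0                                      # mean absolute change that triggers an alarm

    def check_frame(previous, current):
        """Return indices of sensor areas whose mean intensity changed significantly."""
        triggered = []
        for i, (rows, cols) in enumerate(SENSOR_AREAS):
            change = np.abs(current[rows, cols].astype(float) - previous[rows, cols].astype(float))
            if change.mean() > THRESHOLD:
                triggered.append(i)
        return triggered

    # Synthetic 1/60 s frames: an "intruder" brightens part of sensor area 0.
    prev = np.full((256, 320), 60, dtype=np.uint8)
    curr = prev.copy()
    curr[110:130, 60:80] += 80
    print("triggered sensor areas:", check_frame(prev, curr))   # -> [0]
    ```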

  13. Camera Augmented Mobile C-arm

    NASA Astrophysics Data System (ADS)

    Wang, Lejing; Weidert, Simon; Traub, Joerg; Heining, Sandro Michael; Riquarts, Christian; Euler, Ekkehard; Navab, Nassir

    The Camera Augmented Mobile C-arm (CamC) system, which extends a regular mobile C-arm with a video camera, provides an X-ray and video image overlay. Thanks to the mirror construction and a one-time calibration of the device, the acquired X-ray images are co-registered with the video images without any calibration or registration during the intervention. It is very important to quantify and qualify the system before its introduction into the OR. In this communication, we extended the previously performed overlay accuracy analysis of the CamC system with another clinically important parameter: the radiation dose applied to the patient. Since the mirror of the CamC system will absorb and scatter radiation, we introduce a method for estimating the correct applied dose by using an independent dose measurement device. The results show that the mirror absorbs and scatters 39% of X-ray radiation.

  14. Declarative camera control for automatic cinematography

    SciTech Connect

    Christianson, D.B.; Anderson, S.E.; Li-wei He

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  15. Advanced EVA Suit Camera System Development Project

    NASA Technical Reports Server (NTRS)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  16. Foote's Law and its application to cameras

    NASA Astrophysics Data System (ADS)

    Kim, Charles C.

    2016-05-01

    In modeling and characterizing a focal plane array (FPA) with a uniform source, estimating the irradiance on the FPA is inevitable. Many have developed needed formulas for the estimate. Those formulas mostly focus on one pixel of the FPA on the optical axis, ignoring all the other pixels. I use Foote's law here to derive the formulas for all the pixels in a simple configuration where the FPA is directly exposed to the uniform source. I extend the formulas for two more configurations: the FPA enclosed with a baffle and the FPA housed with a lens. My results are compared with some existing formulas. They show differences, yet reach an agreement with some approximations. My formulas are useful for modeling and trade study for cameras, especially for the cameras with wide field of view.

  17. Automatic exposure control for space sequential camera

    NASA Technical Reports Server (NTRS)

    Mcatee, G. E., Jr.; Stoap, L. J.; Solheim, C. D.; Sharpsteen, J. T.

    1975-01-01

    The final report of the automatic exposure control study for space sequential cameras, performed for the NASA Johnson Space Center, is presented. The material is presented in the same sequence in which the work was performed. The purpose of the automatic exposure control is to automatically control the lens iris as well as the camera shutter so that the subject is properly exposed on the film. A study of design approaches is presented. Analysis of the range of scene luminance to be covered indicates that the practical range would be from approximately 20 to 6,000 foot-lamberts, or about nine f-stops. Observation of film available from space flights shows that optimum scene illumination is apparently not present in vehicle interior photography or in vehicle-to-vehicle situations. The evaluation test procedure for a breadboard and its results, which provided information for the design of a brassboard, are given.

  18. Cervical SPECT Camera for Parathyroid Imaging

    SciTech Connect

    None, None

    2012-08-31

    Primary hyperparathyroidism characterized by one or more enlarged parathyroid glands has become one of the most common endocrine diseases in the world affecting about 1 per 1000 in the United States. Standard treatment is highly invasive exploratory neck surgery called Parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~ 1 mm) and high sensitivity (10x conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  19. Procurement specification color graphic camera system

    NASA Technical Reports Server (NTRS)

    Prow, G. E.

    1980-01-01

    The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster scan computer color terminal into permanent, high resolution photographic prints and transparencies. Images usually displayed will be remotely sensed LANDSAT imager scenes.

  20. Technician Setting up RCA Television Camera

    NASA Technical Reports Server (NTRS)

    1957-01-01

    A television camera is focused by a NACA technician on a ramjet engine model through the schlieren optical windows of the 10 x 10 Foot Supersonic Wind Tunnel's test section. Closed-circuit television enables aeronautical research scientists to view the ramjet, used for propelling missiles, while the wind tunnel is operating at speeds from 1500 to 2500 mph. (8.570) The tests were performed at the Lewis Flight Propulsion Laboratory, now the John H. Glenn Research Center.

  1. Status of the Los Alamos Anger camera

    SciTech Connect

    Seeger, P.A.; Nutter, M.J.

    1985-01-01

    Results of preliminary tests of the neutron Anger camera being developed at Los Alamos are presented. This detector uses a unique encoding scheme involving parallel processing of multiple receptive fields. Design goals have not yet been met, but the results are very encouraging and improvements in the test procedures are expected to show that the detector will be ready for use on a small-angle scattering instrument next year. 3 refs., 4 figs.

  2. Volcanological applications of SO2 cameras

    NASA Astrophysics Data System (ADS)

    Burton, M. R.; Prata, F.; Platt, U.

    2015-07-01

    Ground-based volcanic gas and ash imaging has the potential to revolutionise the way in which volcanoes are monitored and studied. The ability to track and quantify volcanic emissions in space and time with unprecedented fidelity opens the door to integration with geophysical measurements, allowing breakthroughs in our understanding of the physical processes driving volcanic activity. In May 2013 a European Science Foundation funded Plume Imaging workshop was conducted in Stromboli, Italy, with the objective of bringing the ground-based volcanic plume imaging community together in order to examine the state of the art, and move towards a 'best-practice' for volcanic ash and gas imaging techniques. A particular focus was the development of SO2 imaging systems, or SO2 cameras, with six teams deploying and testing various designs of ultraviolet and infrared-based imaging systems capable of imaging SO2. One conclusion of the workshop was that the term 'SO2 camera' should be applied to any SO2 imaging system, regardless of wavelength of radiation used. This Special Issue on Volcanic Plume Imaging is the direct result of the Stromboli workshop, and together the papers presented here represent the state of the art of ground-based volcano plume imaging science and technology. In this work, we examine in detail the volcanological applications of the SO2 camera, reviewing previous works and placing the new research contained in this Special Issue in context. The development of the SO2 camera, and future developments extending imaging to other volcanic gases, is one of the most exciting and novel research frontiers in volcanology today.

  3. Using a portable holographic camera in cosmetology

    NASA Astrophysics Data System (ADS)

    Bakanas, R.; Gudaitis, G. A.; Zacharovas, S. J.; Ratcliffe, D. B.; Hirsch, S.; Frey, S.; Thelen, A.; Ladrière, N.; Hering, P.

    2006-07-01

    The HSF-MINI portable holographic camera is used to record holograms of the human face. The recorded holograms are analyzed using a unique three-dimensional measurement system that provides topometric data of the face with resolution less than or equal to 0.5 mm. The main advantages of this method over other, more traditional methods (such as laser triangulation and phase-measurement triangulation) are discussed.

  4. Rank-based camera spectral sensitivity estimation.

    PubMed

    Finlayson, Graham; Darrodi, Maryam Mohammadzadeh; Mackiewicz, Michal

    2016-04-01

    In order to accurately predict a digital camera response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab-a difficult and lengthy procedure-or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera rgb response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities, when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve (of the sensor). Technically, each rank-pair splits the space where the underlying sensor might lie in two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the ranked-pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers equal estimation to the prior art. However, the Rank-based method delivers a step-change in estimation performance when the data is not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have "raw mode." Experiments validate our method. PMID:27140768
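
    The rank-order constraint can be illustrated with a brute-force feasibility check: a candidate sensitivity (built from smooth basis functions) is feasible only if it reproduces the rank order of the measured responses. The data below is synthetic, and the check is a simplified stand-in for the analytic half-space intersection used in the paper.

    ```python
    # Sketch of the rank-order idea: a candidate spectral sensitivity is feasible
    # only if it reproduces the rank order of the camera's responses to known
    # spectral stimuli. Synthetic data; a simplified stand-in for the method.
    import numpy as np

    rng = np.random.default_rng(2)
    wavelengths = np.linspace(400, 700, 31)

    def gaussian_basis(centers, width=40.0):
        return np.stack([np.exp(-0.5 * ((wavelengths - c) / width) ** 2) for c in centers])

    basis = gaussian_basis([450, 500, 550, 600, 650])        # smooth candidate space

    # Synthetic ground-truth sensor and its responses to random spectral stimuli.
    true_sensor = basis.T @ np.array([0.1, 0.4, 1.0, 0.5, 0.1])
    stimuli = rng.random((40, wavelengths.size))
    rank_order = np.argsort(stimuli @ true_sensor)

    def is_feasible(candidate):
        """A candidate is feasible if it reproduces the observed response rank order."""
        return np.array_equal(np.argsort(stimuli @ candidate), rank_order)

    # The true sensor passes every rank constraint; a spectrally different one should not.
    wrong_sensor = basis.T @ np.array([1.0, 0.1, 0.1, 0.1, 1.0])
    print("true sensor feasible: ", is_feasible(true_sensor))    # True
    print("wrong sensor feasible:", is_feasible(wrong_sensor))   # almost certainly False
    ```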

  5. Multiple-camera tracking: UK government requirements

    NASA Astrophysics Data System (ADS)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific development Branch (HOSDB) are looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post event imagery. The Detection and Vision Systems group at HOSDB were asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end user requirements. Using this OR the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front line application. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  7. STS-79 Flight deck camera during TCDT

    NASA Technical Reports Server (NTRS)

    1996-01-01

    A camera inside the flight deck of the Space Shuttle Atlantis previews crew seating assignments come launch day. Seated in front are STS-79 Commander William F. Readdy (left) and Pilot Terrence W. Wilcutt. Seated elsewhere are Mission Specialists Carl E. Walz, Tom Akers, John E. Blaha and Jay Apt. This photo was taken during the Terminal Countdown Demonstration Test (TCDT), a dress rehearsal for launch. Atlantis is scheduled for liftoff on STS-79 no earlier than Sept. 12.

  8. Detecting Leaks With An Infrared Camera

    NASA Technical Reports Server (NTRS)

    Easter, Barry P.; Steffins, Alfred P., Jr.

    1994-01-01

    Proposed test reveals small leak in gas pipe - for example, leak through fatigue crack induced by vibration - even though insulation covers pipe. Infrared-sensitive video camera aimed at part(s) containing suspected leak(s). Insulated pipe pressurized with gas that absorbs infrared light. If crack is present, escaping gas travels along outside of pipe until it reaches edge of insulation. Gas emerging from edge of insulation appears as dark cloud in video image.

  9. Worldview and route planning using live public cameras

    NASA Astrophysics Data System (ADS)

    Kaseb, Ahmed S.; Chen, Wenyi; Gingade, Ganesh; Lu, Yung-Hsiang

    2015-03-01

    Planning a trip requires considering many unpredictable factors along the route such as traffic, weather, accidents, etc. People are interested in viewing the places they plan to visit and the routes they plan to take. This paper presents a system with an Android mobile application that allows users to: (i) Watch the live feeds (videos or snapshots) from more than 65,000 geotagged public cameras around the world. The user can select the cameras using an interactive world map. (ii) Search for and watch the live feeds from the cameras along the route between a starting point and a destination. The system consists of a server which maintains a database with the cameras' information, and a mobile application that shows the camera map and communicates with the cameras. In order to evaluate the system, we compare it with existing systems in terms of the total number of cameras, the cameras' coverage, and the number of cameras on various routes. We also discuss the response time of loading the camera map, finding the cameras on a route, and communicating with the cameras.
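
    A sketch of the "cameras along a route" query is shown below: geotagged cameras are kept if they lie within a threshold distance of any point sampled along the route, using the haversine formula. The camera list, route points and 2 km threshold are made-up example values, not the system's actual data or logic.

    ```python
    # Sketch of the "cameras along a route" query: keep geotagged cameras that lie
    # within a threshold distance of any point sampled along the route. The camera
    # coordinates, route and threshold below are made-up example values.
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in kilometres."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    def cameras_along_route(cameras, route_points, max_km=2.0):
        """cameras: list of (camera_id, lat, lon); route_points: list of (lat, lon)."""
        return [cam_id for cam_id, clat, clon in cameras
                if any(haversine_km(clat, clon, rlat, rlon) <= max_km
                       for rlat, rlon in route_points)]

    cameras = [("cam-001", 40.423, -86.921), ("cam-002", 40.495, -86.810), ("cam-003", 41.000, -87.500)]
    route = [(40.42, -86.92), (40.45, -86.88), (40.49, -86.82)]   # sampled route points
    print(cameras_along_route(cameras, route))                     # expected: ['cam-001', 'cam-002']
    ```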

  10. The first satellite laser echoes recorded on the streak camera

    NASA Technical Reports Server (NTRS)

    Hamal, Karel; Prochazka, Ivan; Kirchner, Georg; Koidl, F.

    1993-01-01

    The application of the streak camera with a circular sweep to satellite laser ranging is described. The Modular Streak Camera system employing the circular sweep option was integrated into a conventional satellite laser ranging system. Experimental satellite tracking and ranging have been performed. The first streak camera records of satellite laser echoes are presented.

  11. 39 CFR 3001.31a - In camera orders.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false In camera orders. 3001.31a Section 3001.31a Postal... Applicability § 3001.31a In camera orders. (a) Definition. Except as hereinafter provided, documents and testimony made subject to in camera orders are not made a part of the public record, but are...

  12. 16 CFR 3.45 - In camera orders.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 1 2014-01-01 2014-01-01 false In camera orders. 3.45 Section 3.45... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Hearings § 3.45 In camera orders. (a) Definition. Except as hereinafter provided, material made subject to an in camera order will be kept confidential and not placed...

  13. 16 CFR 3.45 - In camera orders.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 1 2012-01-01 2012-01-01 false In camera orders. 3.45 Section 3.45... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Hearings § 3.45 In camera orders. (a) Definition. Except as hereinafter provided, material made subject to an in camera order will be kept confidential and not placed...

  14. 16 CFR 3.45 - In camera orders.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false In camera orders. 3.45 Section 3.45... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Hearings § 3.45 In camera orders. (a) Definition. Except as hereinafter provided, material made subject to an in camera order will be kept confidential and not placed...

  15. 16 CFR 3.45 - In camera orders.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 1 2013-01-01 2013-01-01 false In camera orders. 3.45 Section 3.45... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Hearings § 3.45 In camera orders. (a) Definition. Except as hereinafter provided, material made subject to an in camera order will be kept confidential and not placed...

  16. Astronaut Jack Lousma works at Multispectral camera experiment

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Astronaut Jack R. Lousma, Skylab 3 pilot, works at the S190A multispectral camera experiment in the Multiple Docking Adapter (MDA), seen from a color television transmission made by a TV camera aboard the Skylab space station cluster in Earth orbit. Lousma later used a small brush to clean the six lenses of the multispectral camera.

  17. 11. COMPLETED FIXED CAMERA STATION 1700 (BUILDING NO. 42022) LOOKING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. COMPLETED FIXED CAMERA STATION 1700 (BUILDING NO. 42022) LOOKING WEST SHOWING WINDOW OPENING FOR CAMERA, March 31, 1948. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  18. Solid state television camera has no imaging tube

    NASA Technical Reports Server (NTRS)

    Huggins, C. T.

    1972-01-01

    Camera with characteristics of vidicon camera and greater resolution than home TV receiver uses mosaic of phototransistors. Because of low power and small size, camera has many applications. Mosaics can be used as cathode ray tubes and analog-to-digital converters.

  19. The NASA - Arc 10/20 micron camera

    NASA Technical Reports Server (NTRS)

    Roellig, T. L.; Cooper, R.; Deutsch, L. K.; Mccreight, C.; Mckelvey, M.; Pendleton, Y. J.; Witteborn, F. C.; Yuen, L.; Mcmahon, T.; Werner, M. W.

    1994-01-01

    A new infrared camera (AIR Camera) has been developed at NASA - Ames Research Center for observations from ground-based telescopes. The heart of the camera is a Hughes 58 x 62 pixel Arsenic-doped Silicon detector array that has the spectral sensitivity range to allow observations in both the 10 and 20 micron atmospheric windows.

  20. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were assessed, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities.

  1. Remote hardware-reconfigurable robotic camera

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  2. BAE systems' SMART chip camera FPA development

    NASA Astrophysics Data System (ADS)

    Sengupta, Louise; Auroux, Pierre-Alain; McManus, Don; Harris, D. Ahmasi; Blackwell, Richard J.; Bryant, Jeffrey; Boal, Mihir; Binkerd, Evan

    2015-06-01

    BAE Systems' SMART (Stacked Modular Architecture High-Resolution Thermal) Chip Camera provides very compact long-wave infrared (LWIR) solutions by combining a 12 μm wafer-level packaged focal plane array (FPA) with multichip-stack, application-specific integrated circuit (ASIC) and wafer-level optics. The key innovations that enabled this include a single-layer 12 μm pixel bolometer design and robust fabrication process, as well as wafer-level lid packaging. We used advanced packaging techniques to achieve an extremely small-form-factor camera, with a complete volume of 2.9 cm3 and a thermal core weight of 5.1g. The SMART Chip Camera supports up to 60 Hz frame rates, and requires less than 500 mW of power. This work has been supported by the Defense Advanced Research Projects Agency's (DARPA) Low Cost Thermal Imager - Manufacturing (LCTI-M) program, and BAE Systems' internal research and development investment.

  3. Infrared stereo camera for human machine interface

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Vaden, Justin; Chenault, David

    2012-06-01

    Improved situational awareness results not only from improved performance of imaging hardware, but also when the operator and human factors are considered. Situational awareness for IR imaging systems frequently depends on the contrast available. A significant improvement in effective contrast for the operator can result when depth perception is added to the display of IR scenes. Depth perception through flat panel 3D displays is now possible due to the number of 3D displays entering the consumer market. Such displays require appropriate and human-friendly stereo IR video input in order to be effective in the dynamic military environment. We report on a stereo IR camera that has been developed for integration onto an unmanned ground vehicle (UGV). The camera has auto-convergence capability that significantly reduces ill effects due to image doubling, minimizes focus-convergence mismatch, and eliminates the need for the operator to manually adjust camera properties. Discussion of the size, weight, and power requirements as well as integration onto the robot platform will be given, along with a description of stand-alone operation.

  4. Refocusing distance of a standard plenoptic camera.

    PubMed

    Hahne, Christopher; Aggoun, Amar; Velisavljevic, Vladan; Fiebig, Susanne; Pesch, Matthias

    2016-09-19

    Recent developments in computational photography enabled variation of the optical focus of a plenoptic camera after image exposure, also known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfyingly predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, it will be shown in this paper that its solution yields an intersection indicating the distance to a refocused object plane. Experimental work is conducted with different lenses and focus settings while comparing distance estimates with a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35% in comparison to optical design software. The proposed refocusing estimator assists in predicting object distances, for example in the prototyping stage of plenoptic cameras, and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is done by analyzing a stack of refocused photographs. PMID:27661891
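
    As a toy illustration of the ray-intersection idea (a simplified 2D sketch, not the authors' full plenoptic ray model), two rays can be written as lateral position versus axial distance, x(z) = m z + b, and solved for their crossing point; the z coordinate of that crossing is the refocused object-plane distance. The slopes and offsets below are made-up numbers.

```python
def ray_intersection_depth(m1, b1, m2, b2):
    """Intersect two rays written as lateral position x(z) = m*z + b.

    m: ray slope (direction), b: lateral offset at z = 0 (e.g. the main-lens plane).
    Returns (z, x) of the intersection, i.e. the distance of the refocused object
    plane along the optical axis and the lateral position there.
    """
    if m1 == m2:
        raise ValueError("parallel rays do not intersect")
    z = (b2 - b1) / (m1 - m2)
    x = m1 * z + b1
    return z, x

# Hypothetical example: two rays leaving the lens plane with different slopes
print(ray_intersection_depth(m1=-0.02, b1=1.0, m2=-0.05, b2=2.5))  # -> (50.0, 0.0)
```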

  5. Direct calibration methodology for stereo cameras

    NASA Astrophysics Data System (ADS)

    Do, Yongtae; Yoo, Seog-Hwan; Lee, Dae-Sik

    1998-10-01

    This paper describes a new technique for stereoscopic 3D position measurement. By defining stereo cameras as a system for image-to-world mapping, the mapping function is determined. The direct representation of the 3D coordinates of a world point in terms of the corresponding stereo image coordinates is derived using the pin-hole model. One camera frame is related to the other before being related to the world frame, so that the stereo rig itself, rather than each camera separately, can be directly related to the 3D world. The equations obtained are simple, straightforward, and closed-form. However, since the nonlinearity of the actual imaging system is not considered, high accuracy cannot be expected when the equations are employed directly for 3D measurements. To tackle this problem, a multilayer feedforward neural network trained by the back-propagation algorithm is used. In our experiment, the network satisfactorily provided fine correction, using its function-learning capability, after rough mapping by the linear equations.
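
    A minimal sketch of this two-stage idea, under assumptions of my own: an affine rough mapping fitted by least squares stands in for the paper's closed-form equations, and scikit-learn's MLPRegressor stands in for its back-propagation network; the arrays are random placeholders for real calibration measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Calibration data: stereo image coordinates (u_l, v_l, u_r, v_r) of known
# world points (X, Y, Z). Random placeholders stand in for real measurements.
rng = np.random.default_rng(0)
img = rng.uniform(-1, 1, size=(200, 4))      # stereo image coordinates
world = rng.uniform(-1, 1, size=(200, 3))    # corresponding 3D points

# Rough mapping: affine, closed-form via linear least squares.
A = np.hstack([img, np.ones((len(img), 1))])          # add constant term
coef, *_ = np.linalg.lstsq(A, world, rcond=None)
rough = A @ coef

# Fine correction: a small feedforward network learns the residual nonlinearity
# (lens distortion etc.) that the linear model cannot capture.
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(img, world - rough)
refined = rough + mlp.predict(img)
print("rms error:", np.sqrt(np.mean((refined - world) ** 2)))
```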

  6. SPECT detectors: the Anger Camera and beyond

    NASA Astrophysics Data System (ADS)

    Peterson, Todd E.; Furenlid, Lars R.

    2011-09-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous sodium iodide scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic.

  8. SPECT detectors: the Anger Camera and beyond

    PubMed Central

    Peterson, Todd E.; Furenlid, Lars R.

    2011-01-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous NaI(Tl) scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic. PMID:21828904

  9. Optimum color filters for CCD digital cameras.

    PubMed

    Engelhardt, K; Seitz, P

    1993-06-01

    A procedure for the definition of optimum spectral transmission curves for any solid-state (especially silicon-based CCD) color camera is presented. The design of the target curves is based on computer simulation of the camera system and on the use of test colors with known spectral reflectances. Color errors are measured in a uniform color space (CIELUV) and by application of the Commission Internationale de l'Eclairage color difference formula. Dielectric filter stacks were designed by simulated thermal annealing, and a stripe filter pattern was fabricated with transmission properties close to the specifications. Optimization of the color transformation minimizes the residual average color error, and an average color error of ~1 just-noticeable difference should be feasible. This means that color differences on a side-to-side comparison of original and reproduced color are practically imperceptible. In addition, electrical cross talk within the solid-state imager can be compensated by adapting the color matrixing coefficients. The theoretical findings of this work were employed for the design and fabrication of a high-resolution digital CCD color camera with high colorimetric accuracy. PMID:20829908
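
    The error metric the abstract refers to is the standard CIE 1976 (L*, u*, v*) color difference. The sketch below applies that textbook formula; the D65 reference white and the example XYZ values are my own assumptions, not data from the paper.

```python
import numpy as np

def xyz_to_luv(xyz, white=(95.047, 100.0, 108.883)):
    """Convert CIE XYZ to CIELUV (L*, u*, v*) for a given reference white (D65 here)."""
    X, Y, Z = xyz
    Xn, Yn, Zn = white
    def uv_prime(X, Y, Z):
        d = X + 15 * Y + 3 * Z
        return 4 * X / d, 9 * Y / d
    up, vp = uv_prime(X, Y, Z)
    upn, vpn = uv_prime(Xn, Yn, Zn)
    yr = Y / Yn
    L = 116 * yr ** (1 / 3) - 16 if yr > (6 / 29) ** 3 else (29 / 3) ** 3 * yr
    return np.array([L, 13 * L * (up - upn), 13 * L * (vp - vpn)])

def delta_e_uv(xyz_ref, xyz_test):
    """CIE 1976 color difference between a reference and a reproduced color."""
    return float(np.linalg.norm(xyz_to_luv(xyz_ref) - xyz_to_luv(xyz_test)))

# Hypothetical test color: original vs. reproduction through a simulated camera
print(delta_e_uv((41.24, 21.26, 1.93), (42.0, 22.0, 2.1)))
```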

  10. YAP multi-crystal gamma camera prototype

    SciTech Connect

    Blazek, K.; Maly, P.; Notaristefani, F. de |; Pani, R.; Pellegrini, R.; Pergola, A.; Scopinaro, F.; Soluri, A.

    1995-10-01

    The Anger camera principle has shown a practical limit of a few millimeters spatial resolution. To overcome this limit, a new gamma camera prototype has been developed, based on a position-sensitive photomultiplier tube (PSPMT) coupled with a new scintillation crystal. The Hamamatsu R2486 PSPMT is a 76-mm diameter photomultiplier tube in which the electrons produced in the conventional bi-alkali photocathode are multiplied by proximity mesh dynodes and form a charge cloud around the original coordinates of the light photon striking the photocathode. A crossed wire anode array collects the charge and detects the original position. The intrinsic spatial resolution of the PSPMT is better than 0.3 mm. The scintillation crystal consists of yttrium aluminum perovskite (YAP:Ce or YAlO₃:Ce). This crystal has a light efficiency of about 38% relative to NaI, no hygroscopicity, and good gamma radiation absorption. To match the characteristics of the PSPMT, a special crystal assembly was produced by the Preciosa Company, consisting of a bundle of YAP:Ce pillars where single crystals have a 0.6 × 0.6 mm² cross section and 3 mm to 18 mm length. Preliminary results from such gamma camera prototypes show spatial resolution values ranging between 0.7 mm and 1 mm with an intrinsic detection efficiency of 37-65% for 140 keV gamma energy.

  11. Reconditioning of Cassini Narrow-Angle Camera

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These five images of single stars, taken at different times with the narrow-angle camera on NASA's Cassini spacecraft, show the effects of haze collecting on the camera's optics, then successful removal of the haze by warming treatments.

    The image on the left was taken on May 25, 2001, before the haze problem occurred. It shows a star named HD339457.

    The second image from left, taken May 30, 2001, shows the effect of haze that collected on the optics when the camera cooled back down after a routine-maintenance heating to 30 degrees Celsius (86 degrees Fahrenheit). The star is Maia, one of the Pleiades.

    The third image was taken on October 26, 2001, after a weeklong decontamination treatment at minus 7 C (19 F). The star is Spica.

    The fourth image was taken of Spica January 30, 2002, after a weeklong decontamination treatment at 4 C (39 F).

    The final image, also of Spica, was taken July 9, 2002, following three additional decontamination treatments at 4 C (39 F) for two months, one month, then another month.

    Cassini, on its way toward arrival at Saturn in 2004, is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini mission for NASA's Office of Space Science, Washington, D.C.

  12. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that has the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or most importantly, an unexploited part of the electromagnetic spectrum. The reason for this was the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially suited for clothes penetration because this radiation has no harmful ionizing effects and is thus safe for human beings. Strong technology development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. Therefore, THz image processing is a very challenging and urgent topic. Digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs hidden under some popular types of clothing.
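
    Purely to illustrate what one basic fusion step might look like (a generic local-contrast-weighted average of registered frames; this is my own example, not one of the combined methods actually used in the article, and the frames below are synthetic):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(images, sigma=3.0):
    """Fuse registered images by weighting each pixel with its local contrast
    (absolute difference from a smoothed copy), favouring the sharper source."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    weights = np.stack([np.abs(im - gaussian_filter(im, sigma)) + 1e-6 for im in stack])
    weights /= weights.sum(axis=0)
    return (weights * stack).sum(axis=0)

# Hypothetical example: two noisy frames of the same scene with a hidden object
rng = np.random.default_rng(5)
scene = np.zeros((64, 64)); scene[20:40, 20:40] = 1.0
frames = [scene + rng.normal(0, 0.3, scene.shape) for _ in range(2)]
print(fuse(frames).shape)
```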

  13. Single eye or camera with depth perception

    NASA Astrophysics Data System (ADS)

    Kornreich, Philipp; Farell, Bart

    2012-10-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. This is accomplished by a short photoconducting lossy lightguide section at each pixel. The eye or camera lens selects the object point whose range is to be determined at the pixel. Light arriving at an image point through a convex lens adds constructively only if it comes from the object point that is in focus at this pixel. Light waves from all other object points cancel. Thus the lightguide at this pixel receives light from one object point only. This light signal has a phase component proportional to the range. The light intensity modes, and thus the photocurrent in the lightguides, shift in response to the phase of the incoming light. Contacts along the length of the lightguide collect the photocurrent signal containing the range information. Applications of this camera include autonomous vehicle navigation and robotic vision. An interesting application is as part of a crude teleportation system consisting of this camera and a three-dimensional printer at a remote location.

  14. SPECT detectors: the Anger Camera and beyond.

    PubMed

    Peterson, Todd E; Furenlid, Lars R

    2011-09-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous sodium iodide scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic. PMID:21828904

  15. Feasibility evaluation and study of adapting the attitude reference system to the Orbiter camera payload system's large format camera

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A design concept that will implement a mapping capability for the Orbital Camera Payload System (OCPS) when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera optical axis, such pointing information is made available.

  16. The YAP camera: An accurate gamma camera particularly suitable for new radiopharmaceuticals research

    SciTech Connect

    Vittori, F.; Notaristefani, F. de; Malatesta, T.

    1997-02-01

    The YAP Camera represents a refined research instrument in nuclear medicine and pharmacology because of its overall detection efficiency, comparable to an Anger Camera, and its submillimeter intrinsic spatial resolution. The YAP Camera consists of a YAP:Ce multicrystal matrix, whose pillar dimensions are 0.6 mm x 0.6 mm x 10 mm, optically coupled with a position-sensitive PMT Hamamatsu R2486 and furnished with a parallel-hole lead collimator 20 mm thick with a hole diameter of 0.5 mm and septa of 0.15 mm. At this stage it is a miniature camera, with a field of view (FOV) of 40 mm x 40 mm and a total spatial resolution of 1.0-1.2 mm, currently used for radiotracer studies on small biological specimens. A detailed analysis of the detector position linearity and energy responses is presented in this work. The intrinsic spatial resolution is studied with three different single-hole collimators (1.0, 0.3, and 0.2 mm), and a theoretical equation is presented. Three different parallel-hole collimators are tested to evaluate the optimal hole and septa dimensions. Finally, it is demonstrated that two correction procedures are capable of recovering the image spatial homogeneity and of removing the statistical noise. Some phantom images show the importance of the small-field YAP Camera in radiopharmacological research.

  17. Progress on Modeling of Ultrafast X-Ray Streak Cameras

    SciTech Connect

    Huang, G.; Byrd, J.M.; Feng, J.; Qiang, J.; Wang, W.

    2007-06-22

    Streak cameras continue to be useful tools for studying phenomena on the picosecond time scale. We have employed accelerator modeling tools to understand and possibly improve the time resolution of present and future streak cameras. This effort has resulted in an end-to-end model of the camera. This model has contributed to the recent measurement of 230 fs (FWHM) resolution at 266 nm in the Advanced Light Source Streak Camera Laboratory. We describe results from this model that show agreement with the experiments. We also extrapolate the performance of this camera, including several possible improvements.

  18. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
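
    One of the cues listed, noise introduced by the pixel array, lends itself to a simple sketch: average noise residuals from many images taken with a camera into a fingerprint, then correlate that fingerprint with the residual of a questioned image. This PRNU-style toy example is my own illustration, not the examiners' procedure, and the data are simulated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=2.0):
    """Approximate the sensor noise pattern as the image minus a smoothed copy."""
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    """Average the residuals of many images from one camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# Hypothetical usage: a high correlation between a query image's residual and a
# camera's fingerprint suggests the image was taken with that camera.
rng = np.random.default_rng(1)
pattern = rng.normal(0, 1, (64, 64))                       # simulated fixed-pattern noise
shots = [rng.normal(128, 5, (64, 64)) + pattern for _ in range(20)]
fingerprint = camera_fingerprint(shots)
query = rng.normal(128, 5, (64, 64)) + pattern
print(correlation(noise_residual(query), fingerprint))
```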

  19. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors. New quality features can also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes quality factors which are still valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be used with presence capture cameras.

  20. Initial laboratory evaluation of color video cameras: Phase 2

    SciTech Connect

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  1. High-dimensional camera shake removal with given depth map.

    PubMed

    Yue, Tao; Suo, Jinli; Dai, Qionghai

    2014-06-01

    Camera motion blur is drastically nonuniform for large depth-range scenes; the nonuniformity caused by camera translation is depth dependent, whereas that caused by camera rotation is not. To restore blurry images of large-depth-range scenes deteriorated by arbitrary camera motion, we build an image blur model considering 6 degrees of freedom (DoF) of camera motion with a given scene depth map. To make this 6D depth-aware model tractable, we propose a novel parametrization strategy to reduce the number of variables and an effective method to estimate the high-dimensional camera motion as well. The number of variables is reduced by a temporally sampled motion function, which describes the 6-DoF camera motion by sampling the camera trajectory uniformly in the time domain. To effectively estimate the high-dimensional camera motion parameters, we construct a probabilistic motion density function (PMDF) to describe the probability distribution of camera poses during exposure, and apply it as a unified constraint to guide the convergence of the iterative deblurring algorithm. Specifically, the PMDF is computed through a back projection from 2D local blur kernels to the 6D camera motion parameter space and robust voting. We conduct a series of experiments on both synthetic and real captured data, and validate that our method achieves better performance than existing uniform and nonuniform methods on large-depth-range scenes.
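
    The temporal-sampling idea can be pictured with a small sketch. This is my own simplification, not the paper's parametrization: rotations are treated as small-angle rotation vectors interpolated linearly, and the dense trajectory below is a random placeholder.

```python
import numpy as np

def sample_trajectory(poses, n_samples):
    """Reduce a dense 6-DoF trajectory to n_samples poses uniformly spaced in time.

    poses: (T, 6) array of [tx, ty, tz, rx, ry, rz] during the exposure, where the
    rotation part is a small-angle rotation vector. This mirrors the idea of
    describing camera motion by a handful of time samples instead of a full
    motion function.
    """
    t_dense = np.linspace(0.0, 1.0, len(poses))
    t_sparse = np.linspace(0.0, 1.0, n_samples)
    return np.stack([np.interp(t_sparse, t_dense, poses[:, k]) for k in range(6)], axis=1)

def pose_at(samples, t):
    """Linearly interpolate the sampled poses back to an arbitrary time t in [0, 1]."""
    t_sparse = np.linspace(0.0, 1.0, len(samples))
    return np.array([np.interp(t, t_sparse, samples[:, k]) for k in range(6)])

# Hypothetical exposure: 1000 recorded poses reduced to 8 samples
dense = np.cumsum(np.random.default_rng(2).normal(0, 1e-3, (1000, 6)), axis=0)
sparse = sample_trajectory(dense, 8)
print(pose_at(sparse, 0.37))
```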

  2. Qualification Tests of Micro-camera Modules for Space Applications

    NASA Astrophysics Data System (ADS)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  3. Calibration of asynchronous smart phone cameras from moving objects

    NASA Astrophysics Data System (ADS)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many computer vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application with a pair of low-cost portable cameras with different parameters, as found in smart phones. The paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.

  4. Simple method for calibrating omnidirectional stereo with multiple cameras

    NASA Astrophysics Data System (ADS)

    Ha, Jong-Eun; Choi, I.-Sak

    2011-04-01

    Cameras can give useful information for the autonomous navigation of a mobile robot. Typically, one or two cameras are used for this task. Recently, omnidirectional stereo vision systems that can cover the whole surrounding environment of a mobile robot have been adopted. They usually rely on a mirror, which cannot offer uniform spatial resolution. In this paper, we deal with an omnidirectional stereo system which consists of eight cameras, where each pair of vertically mounted cameras constitutes one stereo system. Camera calibration is the first necessary step to obtain 3D information. Calibration using a planar pattern requires many images acquired under different poses, so it is a tedious step to calibrate all eight cameras. In this paper, we present a simple calibration procedure using a cubic-type calibration structure that surrounds the omnidirectional stereo system. We can calibrate all the cameras of the omnidirectional stereo system in just one shot.

  5. Measuring Positions of Objects using Two or More Cameras

    NASA Technical Reports Server (NTRS)

    Klinko, Steve; Lane, John; Nelson, Christopher

    2008-01-01

    An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras. This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate
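
    The record's method relies on CAD-model matching with perspective and affine camera models; as a minimal, generic illustration of recovering a 3D position from two views, the sketch below uses standard linear (DLT) triangulation under the assumption that both projection matrices are already known. The matrices and the test point are hypothetical.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a point seen by two calibrated cameras.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates of the same
    feature (e.g. an object centroid) in each view. Returns the 3D point.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical cameras: identity camera and one translated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))   # ~ [0.5, 0.2, 4.0]
```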

  6. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    Currently, IR cameras are broadly used in optic-electronic tracking, optic-electronic measuring, fire control and optic-electronic countermeasure fields, but the output timing sequence of most IR cameras applied in engineering projects is complex, and the timing documents supplied by the manufacturer are not detailed. To meet the requirement that continuous image transmission and image processing systems need the detailed timing of the IR camera, a sequence measurement system for the IR camera has been designed, and a detailed timing measurement of the applied IR camera has been carried out. FPGA programming combined with SignalTap online observation has been applied in the sequence measurement system, and the precise timing of the IR camera's output signal has been obtained; the detailed timing documentation has been supplied to the continuous image transmission system, the image processing system, etc. The sequence measurement system of the IR camera includes a CameraLink input interface part, an LVDS input interface part, an FPGA part, and a CameraLink output interface part, among which the FPGA part is the key component. Both CameraLink-style and LVDS-style video signals can be accepted by the sequence measurement system, and because the image processing card and image memory card always use the CameraLink interface as their input interface, the output of the sequence measurement system has been designed as a CameraLink interface. The sequence measurement system performs the IR camera's timing measurement and meanwhile performs interface conversion for some cameras. Inside the FPGA of the sequence measurement system, the sequence measurement program, the pixel clock modification, the SignalTap file configuration and the SignalTap online observation have been integrated to realize precise measurement of the IR camera.

  7. AMICA (Antarctic Multiband Infrared CAmera) project

    NASA Astrophysics Data System (ADS)

    Dolci, Mauro; Straniero, Oscar; Valentini, Gaetano; Di Rico, Gianluca; Ragni, Maurizio; Pelusi, Danilo; Di Varano, Igor; Giuliani, Croce; Di Cianno, Amico; Valentini, Angelo; Corcione, Leonardo; Bortoletto, Favio; D'Alessandro, Maurizio; Bonoli, Carlotta; Giro, Enrico; Fantinel, Daniela; Magrin, Demetrio; Zerbi, Filippo M.; Riva, Alberto; Molinari, Emilio; Conconi, Paolo; De Caprio, Vincenzo; Busso, Maurizio; Tosti, Gino; Nucciarelli, Giuliano; Roncella, Fabio; Abia, Carlos

    2006-06-01

    The Antarctic Plateau offers unique opportunities for ground-based Infrared Astronomy. AMICA (Antarctic Multiband Infrared CAmera) is an instrument designed to perform astronomical imaging from Dome-C in the near- (1 - 5 μm) and mid- (5 - 27 μm) infrared wavelength regions. The camera consists of two channels, equipped with a Raytheon InSb 256 array detector and a DRS MF-128 Si:As IBC array detector, cryocooled at 35 and 7 K respectively. Cryogenic devices will move a filter wheel and a sliding mirror, used to feed alternatively the two detectors. Fast control and readout, synchronized with the chopping secondary mirror of the telescope, will be required because of the large background expected at these wavelengths, especially beyond 10 μm. An environmental control system is needed to ensure the correct start-up, shut-down and housekeeping of the camera. The main technical challenge is represented by the extreme environmental conditions of Dome C (T about -90 °C, p around 640 mbar) and the need for a complete automatization of the overall system. AMICA will be mounted at the Nasmyth focus of the 80 cm IRAIT telescope and will perform survey-mode automatic observations of selected regions of the Southern sky. The first goal will be a direct estimate of the observational quality of this new highly promising site for Infrared Astronomy. In addition, IRAIT, equipped with AMICA, is expected to provide a significant improvement in the knowledge of fundamental astrophysical processes, such as the late stages of stellar evolution (especially AGB and post-AGB stars) and the star formation.

  8. OSIRIS The Scientific Camera System Onboard Rosetta

    NASA Astrophysics Data System (ADS)

    Keller, H. U.; Barbieri, C.; Lamy, P.; Rickman, H.; Rodrigo, R.; Wenzel, K.-P.; Sierks, H.; A'Hearn, M. F.; Angrilli, F.; Angulo, M.; Bailey, M. E.; Barthol, P.; Barucci, M. A.; Bertaux, J.-L.; Bianchini, G.; Boit, J.-L.; Brown, V.; Burns, J. A.; Büttner, I.; Castro, J. M.; Cremonese, G.; Curdt, W.; da Deppo, V.; Debei, S.; de Cecco, M.; Dohlen, K.; Fornasier, S.; Fulle, M.; Germerott, D.; Gliem, F.; Guizzo, G. P.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Koschny, D.; Kramm, J. R.; Kührt, E.; Küppers, M.; Lara, L. M.; Llebaria, A.; López, A.; López-Jimenez, A.; López-Moreno, J.; Meller, R.; Michalik, H.; Michelena, M. D.; Müller, R.; Naletto, G.; Origné, A.; Parzianello, G.; Pertile, M.; Quintana, C.; Ragazzoni, R.; Ramous, P.; Reiche, K.-U.; Reina, M.; Rodríguez, J.; Rousset, G.; Sabau, L.; Sanz, A.; Sivan, J.-P.; Stöckner, K.; Tabero, J.; Telljohann, U.; Thomas, N.; Timon, V.; Tomasch, G.; Wittrock, T.; Zaccariotto, M.

    2007-02-01

    The Optical, Spectroscopic, and Infrared Remote Imaging System OSIRIS is the scientific camera system onboard the Rosetta spacecraft (Figure 1). The advanced high performance imaging system will be pivotal for the success of the Rosetta mission. OSIRIS will detect 67P/Churyumov-Gerasimenko from a distance of more than 10⁶ km, characterise the comet shape and volume, its rotational state and find a suitable landing spot for Philae, the Rosetta lander. OSIRIS will observe the nucleus, its activity and surroundings down to a scale of ~2 cm px⁻¹. The observations will begin well before the onset of cometary activity and will extend over months until the comet reaches perihelion. During the rendezvous episode of the Rosetta mission, OSIRIS will provide key information about the nature of cometary nuclei and reveal the physics of cometary activity that leads to the gas and dust coma. OSIRIS comprises a high resolution Narrow Angle Camera (NAC) unit and a Wide Angle Camera (WAC) unit accompanied by three electronics boxes. The NAC is designed to obtain high resolution images of the surface of comet 67P/Churyumov-Gerasimenko through 12 discrete filters over the wavelength range 250-1000 nm at an angular resolution of 18.6 μrad px⁻¹. The WAC is optimised to provide images of the near-nucleus environment in 14 discrete filters at an angular resolution of 101 μrad px⁻¹. The two units use identical shutter, filter wheel, front door, and detector systems. They are operated by a common Data Processing Unit. The OSIRIS instrument has a total mass of 35 kg and is provided by institutes from six European countries.

  9. Precision Multiband Photometry with a DSLR Camera

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Bakos, G. Á.; Penev, K.; Csubry, Z.; Hartman, J. D.; Bhatti, W.; de Val-Borro, M.

    2016-03-01

    Ground-based exoplanet surveys such as SuperWASP, HAT Network of Telescopes (HATNet), and KELT have discovered close to two hundred transiting extrasolar planets in the past several years. The strategy of these surveys is to look at a large field of view and measure the brightnesses of its bright stars to around half a percent per-point precision, which is adequate for detecting hot Jupiters. Typically, these surveys use CCD detectors to achieve high precision photometry. These CCDs, however, are expensive relative to other consumer-grade optical imaging devices, such as digital single-lens reflex cameras (DSLRs). We look at the possibility of using a DSLR camera for precision photometry. Specifically, we used a Canon EOS 60D camera that records light in three colors simultaneously. The DSLR was integrated into the HATNet survey and collected observations for a month, after which photometry was extracted for 6600 stars in a selected stellar field. We found that the DSLR achieves a best-case median absolute deviation of 4.6 mmag per 180 s exposure when the DSLR color channels are combined, and 1000 stars are measured to better than 10 mmag (1%). We also achieve 10 mmag or better photometry in the individual colors. This is good enough to detect transiting hot Jupiters. We performed a candidate search on all stars and found four candidates, one of which is KELT-3b, the only known transiting hot Jupiter in our selected field. We conclude that the Canon 60D is a cheap, lightweight device capable of useful photometry in multiple colors.
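
    The precision figure quoted here, median absolute deviation in millimagnitudes per star, is simple to compute. The sketch below shows only that metric; the light curves are simulated placeholders, not the survey's data.

```python
import numpy as np

def mad_mmag(mags):
    """Median absolute deviation of a magnitude time series, in millimagnitudes."""
    mags = np.asarray(mags, dtype=float)
    return 1000.0 * np.median(np.abs(mags - np.median(mags)))

# Hypothetical light curves: rows are stars, columns are 180 s exposures
rng = np.random.default_rng(3)
lightcurves = rng.normal(0.0, 0.005, size=(6600, 300))   # ~5 mmag scatter
per_star = np.array([mad_mmag(lc) for lc in lightcurves])
print("best-case MAD: %.1f mmag" % per_star.min())
print("stars better than 10 mmag:", int(np.sum(per_star < 10.0)))
```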

  10. Calibrating Images from the MINERVA Cameras

    NASA Astrophysics Data System (ADS)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona, with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to calibrate the CCD cameras of the telescopes to take into account possible instrumental errors in the data. In this project, we developed a pipeline that takes optical images and calibrates them using sky flats, darks, and biases to generate a transit light curve.
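
    A minimal sketch of the standard bias/dark/flat reduction step such a pipeline performs; the frame names, exposure-time scaling and synthetic frames below are illustrative assumptions, not the MINERVA pipeline itself.

```python
import numpy as np

def calibrate_frame(raw, bias, dark, flat, exptime, dark_exptime):
    """Basic CCD calibration: subtract bias and a scaled dark, then divide by a
    normalized sky flat. All inputs are 2D arrays of the same shape."""
    dark_scaled = (dark - bias) * (exptime / dark_exptime)
    flat_corr = flat - bias
    flat_norm = flat_corr / np.median(flat_corr)
    return (raw - bias - dark_scaled) / flat_norm

# Hypothetical frames (synthetic, for illustration only)
rng = np.random.default_rng(4)
shape = (128, 128)
bias = rng.normal(300, 2, shape)
dark = bias + rng.normal(10, 1, shape)          # 60 s dark
flat = bias + rng.normal(20000, 100, shape)     # twilight sky flat
raw = bias + rng.normal(10, 1, shape) + rng.normal(5000, 70, shape)
science = calibrate_frame(raw, bias, dark, flat, exptime=60.0, dark_exptime=60.0)
print(science.mean())
```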

  11. Computational cameras for moving iris recognition

    NASA Astrophysics Data System (ADS)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  12. Exact optics - III. Schwarzschild's spectrograph camera revised

    NASA Astrophysics Data System (ADS)

    Willstrop, R. V.

    2004-03-01

    Karl Schwarzschild identified a system of two mirrors, each defined by conic sections, free of third-order spherical aberration, coma and astigmatism, and with a flat focal surface. He considered it impractical, because the field was too restricted. This system was rediscovered as a quadratic approximation to one of Lynden-Bell's `exact optics' designs which have wider fields. Thus the `exact optics' version has a moderate but useful field, with excellent definition, suitable for a spectrograph camera. The mirrors are strongly aspheric in both the Schwarzschild design and the exact optics version.

  13. Palomar Prime-Focus Infrared Camera (PFIRCAM)

    NASA Astrophysics Data System (ADS)

    Jarrett, T. H.; Beichman, C. A.; van Buren, D.; Gautier, N.; Jorquera, C.; Bruce, C.

    1994-03-01

    In a joint effort between engineers and scientists at the Jet Propulsion Laboratory and the California Institute of Technology, a near-infrared (0.8 - 2.6 micrometers) direct imaging system has been developed and integrated into the Caltech Palomar Observatory detector series. The camera system has been tested and operated in a science mode at the prime-focus (f/3.3) of the Hale 5-m Telescope. This paper outlines the system components and performance, including discussion of the detector linearity.

  14. A Pinhole Camera for SPEAR 2

    SciTech Connect

    Safranek, James

    2002-08-23

    A new pinhole camera system has been installed on SPEAR 2 to measure its vertical emittance. The hard X-rays from a bending magnet source point are imaged on a phosphor screen through a 30 um x 25 um pinhole aperture. The resolution of the system for 8 keV photons is limited to 31.2 um by Fraunhofer diffraction and is much smaller than the 340 um FWHM vertical beam size corresponding to 0.9% vertical coupling. Measurements are presented. Optimal pinhole location and hole size are discussed.

  15. Zoom camera based on liquid lenses

    NASA Astrophysics Data System (ADS)

    Kuiper, S.; Hendriks, B. H. W.; Suijver, J. F.; Deladi, S.; Helwegen, I.

    2007-01-01

    A 1.7× VGA zoom camera was designed based on two variable-focus liquid lenses and three plastic lenses. The strongly varying curvature of the liquid/liquid interface in the lens makes an achromatic design complicated. Special liquids with a rare combination of refractive index and Abbe number are required to prevent chromatic aberrations for all zoom levels and object positions. A set of acceptable liquids was obtained and used in a prototype that was constructed according to our design. First photos taken with the prototype show a proof of principle.

  16. Fundus camera systems: a comparative analysis

    PubMed Central

    DeHoog, Edward; Schwiegerling, James

    2010-01-01

    Retinal photography requires the use of a complex optical system, called a fundus camera, capable of illuminating and imaging the retina simultaneously. The patent literature shows two design forms but does not provide the specifics necessary for a thorough analysis of the designs to be performed. We have constructed our own designs based on the patent literature in optical design software and compared them for illumination efficiency, image quality, ability to accommodate for patient refractive error, and manufacturing tolerances, a comparison lacking in the existing literature. PMID:19137032

  17. Clinical applications with the HIDAC positron camera

    NASA Astrophysics Data System (ADS)

    Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.

    1988-06-01

    A high density avalanche chamber (HIDAC) positron camera has been used for positron emission tomographic (PET) imaging in three different human studies, including patients presenting with: (I) thyroid diseases (124 cases); (II) clinically suspected malignant tumours of the pharynx or larynx (ENT) region (23 cases); and (III) clinically suspected primary malignant and metastatic tumours of the liver (9 cases, 19 PET scans). The positron emitting radiopharmaceuticals used for the three studies were Na 124I (4.2 d half-life) for the thyroid, 55Co-bleomycin (17.5 h half-life) for the ENT region and 68Ga-colloid (68 min half-life) for the liver. Tomographic imaging was performed: (I) 24 h after oral Na 124I administration to the thyroid patients, (II) 18 h after intravenous administration of 55Co-bleomycin to the ENT patients and (III) 20 min following the intravenous injection of 68Ga-colloid to the liver tumour patients. Three different imaging protocols were used with the HIDAC positron camera to perform appropriate tomographic imaging in each patient study. Promising results were obtained in all three studies, particularly in tomographic thyroid imaging, where a significant clinical contribution to diagnosis and therapy planning is made possible by the PET technique. In the other two PET studies encouraging results were obtained for the detection and precise localisation of malignant tumour disease, including an estimate of the functional liver volume based on the reticulo-endothelial system (RES) of the liver, obtained in vivo, and the three-dimensional display of liver PET data using shaded graphics techniques. The clinical significance of the overall results obtained in both the ENT and the liver PET study, however, is still uncertain, and the respective role of PET as a new imaging modality in these applications is not yet clearly established. The clinical impact made by PET in liver and ENT malignant tumour staging needs further investigation

  18. Online camera-gyroscope autocalibration for cell phones.

    PubMed

    Jia, Chao; Evans, Brian L

    2014-12-01

    The gyroscope is playing a key role in helping estimate 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires that the camera, the gyroscope, and their relative pose be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured and with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation in a rolling shutter camera model for cell phones. The proposed method is able to estimate the needed calibration and synchronization parameters online for all kinds of camera motion and can be embedded in gyro-aided applications, such as video stabilization and feature tracking. Both Monte Carlo simulation and cell phone experiments show that the proposed online calibration and synchronization method converges quickly to the ground truth values.

  19. Design of a variable field prototype PET camera

    SciTech Connect

    Wong, W.H.; Uribe, J.; Lu, W.; Hu, G.; Hicks, K.

    1996-06-01

    A prototype PET camera has been designed and is being constructed to test the concept, and develop the engineering design and production methodology for a variable field PET camera. The long term goal of the design is to develop a lower cost, high resolution PET camera. The camera has eight detector heads which form a closely packed octagon detector ring with an average diameter of 44 cm for brain/breast and animal model imaging. The heads can be translated radially to a maximum ring diameter of 70 cm for whole body imaging. In the larger diameter modes, the camera rotates 45° during imaging. The camera heads can be set to intermediate positions to fit the camera to the subject size to maximize detection sensitivity and sampling uniformity. Special design features for imaging the breast and the axillary metastases have been incorporated. The detector design implemented is the quadrant sharing photomultiplier (PMT) design using circular 19 mm PMTs. The BGO detector pitch size is 2.7 x 2.7 mm. The prototype camera images 27 slices simultaneously with an axial field of view (FOV) of 39 mm. The prototype's limited axial FOV, which is appropriate for testing the camera concept, would be expanded in a next-generation clinical camera implementation. Preliminary simulation studies have been performed to evaluate the resolution, sensitivity, and sampling uniformity.

  20. Time-of-Flight Microwave Camera

    NASA Astrophysics Data System (ADS)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.

  1. Mars Cameras Make Panoramic Photography a Snap

    NASA Technical Reports Server (NTRS)

    2008-01-01

    If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape. The rover Pancams take small, 1 megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels. Gigapixel images are more than 200 times the size captured by today's standard 4 megapixel digital camera. Although originally created for the Mars missions, the detail provided by these large photographs allows for many purposes, not all of which are limited to extraterrestrial photography.

  2. Astronomical observations with an infrared array camera

    SciTech Connect

    Tresch-Fienberg, R.M.

    1985-01-01

    Astronomical observations with an infrared array camera demonstrate that arrays are excellent for high spatial resolution photometric mapping of celestial objects. The author describes a 16 x 16 pixel array camera system based on a bismuth-doped silicon charge injection device optimized for use in the 8-13 micron atmospheric window. Observing techniques and image processing algorithms that are unique to the use of an array detector are also discussed. Multi-wavelength, 1-2 arcsec resolution images of three different celestial objects are presented. For the galactic center, maps of the infrared color temperature and emission optical depth are derived. The results are consistent with a model in which a low density region with a massive luminosity source at its center is encircled by a ring of gas and dust from which material may be infalling toward the nucleus. Multiple luminosity sources are not required to explain the infrared appearance of the galactic center. Images of Seyfert galaxy NGC 1068 are the first to resolve the infrared structure of the nucleus and show that it is similar to that at optical and radio wavelengths. Infrared emission extended northeast of the nucleus is identified with the radio jet. Combined with optical spectra and charge coupled device images, the new data imply a causal relationship between the Seyfert activity in the nucleus and the starburst in the disk.

  3. The design of aerial camera focusing mechanism

    NASA Astrophysics Data System (ADS)

    Hu, Changchang; Yang, Hongtao; Niu, Haijun

    2015-10-01

    To maintain the imaging resolution of an aerial camera by compensating for defocus caused by changes in atmospheric temperature, pressure, oblique photographing distance, and other environmental factors [1,2], and to meet the overall design requirements for lower mass and smaller size, a linear focusing mechanism is designed. The focal plane assembly is connected to the focusing drive mechanism through its support structure. Using a precision ball screw, the focusing mechanism converts the rotary motion of the motor into linear motion of the focal plane assembly; a linear guide constrains the motion, a magnetic encoder measures the resulting displacement, and closed-loop control is adopted to realize accurate focusing. This paper presents the design scheme of the focusing mechanism and analyzes its error sources. The design offers low friction and a short transmission chain, which effectively reduces transmission error. The focal plane support is also analyzed with finite element methods and lightweighted. The results show that the focusing precision is better than 3 µm over a focusing range of ±2 mm.
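
    A minimal Python sketch of the closed-loop focusing step described above: the motor drives the ball screw until the magnetic encoder reading is within tolerance of the commanded focal-plane position. The callable names, the proportional gain, and the units are hypothetical; the abstract does not specify the actual controller.

        def focus_to(target_um, read_encoder, command_motor, kp=0.5, tol_um=3.0, max_iters=200):
            """Drive the focal plane to target_um (micrometres) using encoder feedback.

            read_encoder()       -> current focal-plane displacement in micrometres
            command_motor(delta) -> request an incremental move of delta micrometres
            """
            for _ in range(max_iters):
                error = target_um - read_encoder()
                if abs(error) <= tol_um:          # within the ~3 um focusing precision
                    return True
                command_motor(kp * error)         # proportional correction
            return False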

  4. Time-of-Flight Microwave Camera.

    PubMed

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-01-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598

  5. Color gamma camera system for radiation monitoring

    NASA Astrophysics Data System (ADS)

    Mu, Zhiping; Deng, Jingkang; Wang, Yanfeng

    2000-11-01

    Radiation monitoring systems are desired in many places where radioactive materials are utilized. In this paper, a color gamma camera system developed at Tsinghua University (P.R. China) is reported. The system consists of a compact X-ray/gamma-ray detector, a single-hole collimator, a scanning mechanism, and a computer system. The MLEM method is implemented for image reconstruction, which makes it possible to generate high-resolution images with a relatively large aperture. With the associated software, several scanning modes, operating at different speeds and resolutions, are provided and can be selected during operation. In addition, the system can detect radioactive sources emitting rays of different energies and display them as color images. Experiments were made using Am-241 (59.5 keV) and Na-22 (511 keV) to test the performance of the system. The results show that the resolution of this system can be as high as 1.5 degrees. Furthermore, Matlab simulations were made to examine the capability of imaging point sources with a small number of counts and of imaging distributed sources. Promising results were obtained and are reported. Discussion of the camera design and further improvements is given at the end.
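
    The MLEM reconstruction mentioned above has a compact multiplicative update. A generic Python sketch follows; the system matrix A and the measured counts y are placeholders, not the actual data or geometry of the Tsinghua system.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Maximum-likelihood expectation-maximization for emission imaging.

            A : (n_detector_bins, n_pixels) system matrix; A[i, j] is the probability
                that a photon emitted in pixel j is recorded in detector bin i.
            y : (n_detector_bins,) measured counts.
            """
            x = np.ones(A.shape[1])              # flat initial image
            sensitivity = A.sum(axis=0)          # per-pixel normalization, A^T 1
            for _ in range(n_iter):
                forward = A @ x                  # expected counts for the current image
                ratio = y / np.maximum(forward, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
            return x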

  6. Far-infrared cameras for automotive safety

    NASA Astrophysics Data System (ADS)

    Lonnoy, Jacques; Le Guilloux, Yann; Moreira, Raphael

    2005-02-01

    Far-infrared (FIR) cameras, initially used for driving military vehicles, are slowly entering the market of commercial (luxury) cars, where FIR imagery provides useful assistance for driving at night or in adverse conditions (fog, smoke, etc.). However, this imagery demands some effort from the driver, since the images are not as natural to interpret as visible or near-IR ones. A developing field for FIR cameras is ADAS (Advanced Driver Assistance Systems), where processed FIR imagery, fused with data from other sensors (radar, etc.), warns the driver when dangerous situations occur. This communication concentrates on processed FIR imagery for detecting objects or obstacles on or near the road. FIR imagery, which highlights hot spots, is a powerful detection tool, as it provides good contrast on some of the most common elements of road scenery (engines, wheels, exhaust pipes, pedestrians, two-wheelers, animals, etc.). Moreover, FIR algorithms are much more robust than visible-light ones, as image contrast varies less over time (day/night, shadows, etc.). Our detection algorithm is based on the distinctive appearance of vehicles and pedestrians in FIR images on the one hand, and on the analysis of motion over time, which allows anticipation of future motion, on the other. We will show results obtained with processed FIR imagery within the PAROTO project, supported by the French Ministry of Research, which ended in spring 2004.

  7. Fast Camera Movies of NSTX Plasmas

    DOE Data Explorer

    Maqueda, Ricky; Wurden, Glenn

    The National Spherical Torus Experiment (NSTX) is an innovative magnetic fusion device that is being used to study the physics principles of spherically shaped plasmas -- hot ionized gases in which nuclear fusion will occur under the appropriate conditions of temperature, density, and confinement in a magnetic field. Fusion is the energy source of the Sun and all the stars. Scientists believe it can provide an inexhaustible, safe, and environmentally attractive energy source. NSTX was constructed by the Princeton Plasma Physics Laboratory (PPPL) in conjunction with Oak Ridge National Laboratory, Columbia University, and the University of Washington, Seattle. The original TIF images recorded by the KODAK digital camera (i.e., "raw data") are available using the contact information given on the same web page that provides access to these fast camera movies. MPEG clips are organized under the following headings: • Gas Puff Imaging (GPI) diagnostic • GPI experiments • H-modes (longer) • H-modes (short) • Coaxial Helicity Injection experiments. More than 100 MPEGs dating back to 1999 are available for public access.

  8. Quality criterion for digital still camera

    NASA Astrophysics Data System (ADS)

    Bezryadin, Sergey

    2007-02-01

    The main quality requirements for a digital still camera are color-capturing accuracy, low noise level, and quantum efficiency. Different consumers assign different priorities to these parameters, and camera designers need clearly formulated methods for their evaluation. While there are procedures for estimating noise level and quantum efficiency, there are no effective means for estimating color-capturing accuracy. The criterion introduced in this paper fills this gap. The Luther-Ives condition for a correct color reproduction system became known at the beginning of the last century. However, since no detector system satisfies the Luther-Ives condition, there are always stimuli that are distinctly different for an observer but that the detectors are unable to distinguish. To estimate how closely a detector set conforms to the Luther-Ives condition and to quantify the discrepancy, the angle between the detector spectral sensitivities and Cohen's Fundamental Color Space may be used. In this paper, this divergence angle is calculated for some typical CCD sensors, and it is demonstrated how the angle might be reduced with a corrective filter. In addition, it is shown that with a specific corrective filter, Foveon sensors become a detector system with good Luther-Ives compliance.
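
    One way to compute such a divergence angle is as the largest principal angle between the subspace spanned by the camera's spectral sensitivities and the subspace spanned by the colour-matching functions. The Python sketch below uses orthonormal bases and an SVD; the input spectra are placeholders, and this construction may differ in detail from the paper's exact definition.

        import numpy as np

        def max_principal_angle_deg(camera_sens, matching_funcs):
            """Largest principal angle (degrees) between two spectral subspaces.

            camera_sens    : (n_wavelengths, 3) camera spectral sensitivities
            matching_funcs : (n_wavelengths, 3) colour-matching functions spanning
                             the fundamental colour space
            """
            Qa, _ = np.linalg.qr(camera_sens)
            Qb, _ = np.linalg.qr(matching_funcs)
            cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
            cosines = np.clip(cosines, 0.0, 1.0)
            return np.degrees(np.arccos(cosines.min()))  # smallest cosine = largest angle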

  9. The infrared camera onboard JEM-EUSO

    NASA Astrophysics Data System (ADS)

    Adams, J. H.; Ahmad, S.; Albert, J.-N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J.-N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J.-S.; Kim, S.-W.; Kim, S.-W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. 
H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J.; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.

    2015-11-01

    The Extreme Universe Space Observatory on the Japanese Experiment Module (JEM-EUSO) on board the International Space Station (ISS) is the first space-based mission worldwide in the field of Ultra High-Energy Cosmic Rays (UHECR). For UHECR experiments, the atmosphere is not only the showering calorimeter for the primary cosmic rays, it is an essential part of the readout system, as well. Moreover, the atmosphere must be calibrated and has to be considered as input for the analysis of the fluorescence signals. Therefore, the JEM-EUSO Space Observatory is implementing an Atmospheric Monitoring System (AMS) that will include an IR-Camera and a LIDAR. The AMS Infrared Camera is an infrared, wide FoV, imaging system designed to provide the cloud coverage along the JEM-EUSO track and the cloud top height to properly achieve the UHECR reconstruction in cloudy conditions. In this paper, an updated preliminary design status, the results from the calibration tests of the first prototype, the simulation of the instrument, and preliminary cloud top height retrieval algorithms are presented.

  10. Focal Plane Metrology for the LSST Camera

    SciTech Connect

    A Rasmussen, Andrew P.; Hale, Layton; Kim, Peter; Lee, Eric; Perl, Martin; Schindler, Rafe; Takacs, Peter; Thurston, Timothy; /SLAC

    2007-01-10

    Meeting the science goals for the Large Synoptic Survey Telescope (LSST) translates into a demanding set of imaging performance requirements for the optical system over a wide (3.5°) field of view. In turn, meeting those imaging requirements necessitates maintaining precise control of the focal plane surface (10 µm P-V) over the entire field of view (640 mm diameter) at the operating temperature (T ≈ -100 °C) and over the operational elevation angle range. We briefly describe the hierarchical design approach for the LSST Camera focal plane and the baseline design for assembling the flat focal plane at room temperature. Preliminary results of gravity load and thermal distortion calculations are provided, and early metrological verification of candidate materials under cold thermal conditions is presented. A detailed, generalized method for stitching together sparse metrology data originating from differential, non-contact metrological data acquisition spanning multiple (non-continuous) sensor surfaces making up the focal plane is described and demonstrated. Finally, we describe some in situ alignment verification alternatives, some of which may be integrated into the camera's focal plane.

  11. Time-of-Flight Microwave Camera

    PubMed Central

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-01-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598

  12. X-ray Pinhole Camera Measurements

    SciTech Connect

    Nelson, D. S.; Berninger, M. J.; Flores, P. A.; Good, D. E.; Henderson, D. J.; Hogge, K. W.; Huber, S. R.; Lutz, S. S.; Mitchell, S. E.; Howe, R. A.; Mitton, C. V.; Molina, I.; Bozman, D. R.; Cordova, S. R.; Mitchell, D. R.; Oliver, B. V.; Ormond, E. C.

    2013-07-01

    The development of the rod pinch diode [1] has led to high-resolution radiography for dynamic events such as explosive tests. Rod pinch diodes use a small diameter anode rod, which extends through the aperture of a cathode plate. Electrons borne off the aperture surface can self-insulate and pinch onto the tip of the rod, creating an intense, small x-ray source (Primary Pinch). This source has been utilized as the main diagnostic on numerous experiments that include high-value, single-shot events. In such applications there is an emphasis on machine reliability, x-ray reproducibility, and x-ray quality [2]. In tests with the baseline rod pinch diode, we have observed that an additional pinch (Secondary Pinch) occurs at the interface near the anode rod and the rod holder. This suggests that stray electrons exist that are not associated with the Primary Pinch. In this paper we present measurements on both pinches using an x-ray pinhole camera. The camera is placed downstream of the Primary Pinch at an angle of 60° with respect to the diode centerline. This diagnostic will be employed to diagnose x-ray reproducibility and quality. In addition, we will investigate the performance of hybrid diodes relating to the formation of the Primary and Secondary Pinches.

  13. The Dark Energy Survey Camera (DECam)

    SciTech Connect

    Diehl, H.Thomas; /Fermilab

    2011-09-09

    The Dark Energy Survey (DES) is a next generation optical survey aimed at understanding the expansion rate of the Universe using four complementary methods: weak gravitational lensing, galaxy cluster counts, baryon acoustic oscillations, and Type Ia supernovae. To perform the survey, the DES Collaboration is building the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera that will be mounted at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory. CCD production has finished, yielding roughly twice the required 62 2k x 4k detectors. The construction of DECam is nearly finished. Integration and commissioning on a 'telescope simulator' of the major hardware and software components, except for the optics, recently concluded at Fermilab. Final assembly of the optical corrector has started at University College, London. Some components have already been received at CTIO. 'First-light' will be sometime in 2012. This oral presentation concentrates on the technical challenges involved in building DECam (and how we overcame them), and the present status of the instrument.

  14. Electronic Still Camera Project on STS-48

    NASA Technical Reports Server (NTRS)

    1991-01-01

    On behalf of NASA, the Office of Commercial Programs (OCP) has signed a Technical Exchange Agreement (TEA) with Autometric, Inc. (Autometric) of Alexandria, Virginia. The purpose of this agreement is to evaluate and analyze a high-resolution Electronic Still Camera (ESC) for potential commercial applications. During the mission, Autometric will provide unique photo analysis and hard-copy production. Once the mission is complete, Autometric will furnish NASA with an analysis of the ESC's capabilities. Electronic still photography is a developing technology providing the means by which a handheld camera electronically captures and produces a digital image with resolution approaching film quality. The digital image, stored on removable hard disks or small optical disks, can be converted to a format suitable for downlink transmission, or it can be enhanced using image processing software. The on-orbit ability to enhance or annotate high-resolution images and then downlink these images in real-time will greatly improve Space Shuttle and Space Station capabilities in Earth observations and on-board photo documentation.

  15. FIDO Rover Retracted Arm and Camera

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The Field Integrated Design and Operations (FIDO) rover extends the large mast that carries its panoramic camera. The FIDO is being used in ongoing NASA field tests to simulate driving conditions on Mars. FIDO is controlled from the mission control room at JPL's Planetary Robotics Laboratory in Pasadena. FIDO uses a robot arm to manipulate science instruments and it has a new mini-corer or drill to extract and cache rock samples. Several camera systems onboard allow the rover to collect science and navigation images by remote-control. The rover is about the size of a coffee table and weighs as much as a St. Bernard, about 70 kilograms (150 pounds). It is approximately 85 centimeters (about 33 inches) wide, 105 centimeters (41 inches) long, and 55 centimeters (22 inches) high. The rover moves up to 300 meters an hour (less than a mile per hour) over smooth terrain, using its onboard stereo vision systems to detect and avoid obstacles as it travels 'on-the-fly.' During these tests, FIDO is powered by both solar panels that cover the top of the rover and by replaceable, rechargeable batteries.

  16. Multi-band infrared camera systems

    NASA Astrophysics Data System (ADS)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.

  17. Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera

    ERIC Educational Resources Information Center

    Fuhrman, Nicholas E.

    2016-01-01

    Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…

  18. From the Pinhole Camera to the Shape of a Lens: The Camera-Obscura Reloaded

    ERIC Educational Resources Information Center

    Ziegler, Max; Priemer, Burkhard

    2015-01-01

    We demonstrate how the form of a plano-convex lens and a derivation of the thin lens equation can be understood through simple physical considerations. The basic principle is the extension of the pinhole camera using additional holes. The resulting images are brought into coincidence through the deflection of light with an arrangement of prisms.…

  19. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the keys to enhancing motorcycle safety, motorcycle operations, and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera, and a unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize interference from background noise sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.

  20. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  1. Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to the multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped on a 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R, and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood it can be greatly reduced during ground

  2. Localization and trajectory reconstruction in surveillance cameras with nonoverlapping views.

    PubMed

    Pflugfelder, Roman; Bischof, Horst

    2010-04-01

    This paper proposes a method that localizes two surveillance cameras and simultaneously reconstructs object trajectories in 3D space. The method is an extension of the Direct Reference Plane method, which formulates the localization and the reconstruction as a system of linear equations that is globally solvable by Singular Value Decomposition. The method's assumptions are static synchronized cameras, smooth trajectories, known camera internal parameters, and the rotation between the cameras in a world coordinate system. The paper describes the method in the context of self-calibrating cameras, where the internal parameters and the rotation can be jointly obtained assuming a man-made scene with orthogonal structures. Experiments with synthetic and real image data show that the method can recover the camera centers with an error less than half a meter even in the presence of a 4 meter gap between the fields of view. PMID:20224125
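
    The Direct Reference Plane formulation reduces the joint localization and reconstruction to a homogeneous linear system solved by Singular Value Decomposition; the minimal Python sketch below shows that solve step. The stacked constraint matrix M is a placeholder and does not reproduce the paper's exact parameterization.

        import numpy as np

        def solve_homogeneous(M):
            """Least-squares solution of M x = 0 subject to ||x|| = 1.

            M : (n_constraints, n_unknowns) stacked linear constraints on the
                camera centres and the 3D trajectory points.
            Returns the right singular vector for the smallest singular value.
            """
            _, _, Vt = np.linalg.svd(M)
            return Vt[-1]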

  3. Impact of CCD camera SNR on polarimetric accuracy.

    PubMed

    Chen, Zhenyue; Wang, Xia; Pacheco, Shaun; Liang, Rongguang

    2014-11-10

    A comprehensive charge-coupled device (CCD) camera noise model is employed to study the impact of CCD camera signal-to-noise ratio (SNR) on polarimetric accuracy. The study shows that the standard deviations of the measured degree of linear polarization (DoLP) and angle of linear polarization (AoLP) are mainly dependent on the camera SNR. As the camera SNR increases, both the measurement errors and the standard deviations caused by the CCD camera noise decrease. When the DoLP of the incident light is smaller than 0.1, the camera SNR should be at least 75 to achieve a measurement error of less than 0.01. When the input DoLP is larger than 0.5, an SNR of 15 is sufficient to achieve the same measurement accuracy. An experiment is carried out to verify the simulation results.
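
    The quantities whose errors are analysed here are the usual Stokes-derived ones. As a hedged illustration, the Python sketch below computes DoLP and AoLP from four analyser measurements at 0°, 45°, 90° and 135°; this is a common measurement scheme and not necessarily the configuration used in the paper.

        import numpy as np

        def dolp_aolp(i0, i45, i90, i135):
            """Linear Stokes parameters and the derived DoLP and AoLP."""
            s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
            s1 = i0 - i90
            s2 = i45 - i135
            dolp = np.sqrt(s1**2 + s2**2) / s0
            aolp = 0.5 * np.arctan2(s2, s1)      # radians
            return dolp, aolp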

  4. Localization and trajectory reconstruction in surveillance cameras with nonoverlapping views.

    PubMed

    Pflugfelder, Roman; Bischof, Horst

    2010-04-01

    This paper proposes a method that localizes two surveillance cameras and simultaneously reconstructs object trajectories in 3D space. The method is an extension of the Direct Reference Plane method, which formulates the localization and the reconstruction as a system of linear equations that is globally solvable by Singular Value Decomposition. The method's assumptions are static synchronized cameras, smooth trajectories, known camera internal parameters, and the rotation between the cameras in a world coordinate system. The paper describes the method in the context of self-calibrating cameras, where the internal parameters and the rotation can be jointly obtained assuming a man-made scene with orthogonal structures. Experiments with synthetic and real image data show that the method can recover the camera centers with an error less than half a meter even in the presence of a 4 meter gap between the fields of view.

  5. Performance evaluation and clinical applications of 3D plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.

  6. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) systems have been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation. This paper provides an extensive review of the camera system of HMA and its taxonomy, including camera types, camera calibration, and camera configuration. The review focuses on camera system considerations for HMA specifically in biomedical applications. The review is important because it provides guidelines and recommendations for researchers and practitioners selecting a camera system for an HMA system for biomedical applications.

  7. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors make it possible to perform photometry in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics, read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec), and electronic gain (e^{-}/ADU), are presented for the commercial digital camera Canon 5D Mark III. We give the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D Mark III and the ALTA E47 CCD shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
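
    The electronic gain quoted above (e-/ADU) is commonly estimated with the photon-transfer method from a pair of flat-field frames. A hedged Python sketch follows, assuming the shot-noise-dominated regime and ignoring read noise; this is a standard technique and not necessarily the exact procedure used in the paper.

        import numpy as np

        def photon_transfer_gain(flat1, flat2, bias_level=0.0):
            """Estimate gain (e-/ADU) from two flat-field frames of equal exposure.

            Uses gain = signal / variance, with the variance taken from the
            difference frame so that fixed-pattern structure cancels.
            """
            signal = 0.5 * (flat1.mean() + flat2.mean()) - bias_level
            diff_var = np.var(flat1.astype(float) - flat2.astype(float)) / 2.0
            return signal / diff_var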

  8. A wide-angle camera module for disposable endoscopy

    NASA Astrophysics Data System (ADS)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-08-01

    A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.

  9. Reflectance and illuminant estimation for digital cameras

    NASA Astrophysics Data System (ADS)

    Dicarlo, Jeffrey Michael

    Several important problems in color imaging can be traced to differences in how cameras and humans sample the spectral properties of light. Color processing within the imaging pipeline, loosely referred to as color correction, transforms the sampled camera responses to a form that matches the human responses. The accuracy of the color correction transformation is limited for two reasons. First, the human visual system and most color acquisition devices critically undersample the spectral information, making the differences in their sampling functions quite significant. Second, the human visual system derives a relatively constant surface color appearance despite variations in the illuminant, complicating color correction with the need to estimate the illuminant. Assuming complete knowledge of the illuminant, we formulate color correction as an input-referred estimation problem. In particular, we analyze how a small number of camera measurements can be used to estimate a complete spectral surface reflectance function. We introduce conventional linear color transformations, and then extend these transformations using forms of local linear regression that we refer to as submanifold estimation methods. These methods are based on the observation that for many data sets the deviations between the signal and the linear estimate are systematic; submanifold methods incorporate knowledge of these systematic deviations to improve upon linear estimation methods. We describe the geometric intuition of these methods and evaluate the submanifold method on printed material data and hyperspectral image data. Next, we discard the assumption of complete knowledge of the illuminant and analyze a technique to estimate the illuminant. Conventional algorithms rely on statistical assumptions about the scene properties (surface reflectance functions and geometry) to estimate the ambient illuminant. We introduce a new illuminant estimation paradigm that uses an active imaging method to
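
    As a point of reference for the linear estimators discussed above, the Python sketch below reconstructs a spectral reflectance from a few camera responses using a low-dimensional reflectance basis and least squares. The variable names and the basis are placeholders, and the submanifold (locally linear) extension described in the abstract is not shown.

        import numpy as np

        def estimate_reflectance(camera_rgb, sensor_matrix, basis):
            """Linear reflectance estimate from a small number of camera responses.

            camera_rgb    : (3,) measured responses
            sensor_matrix : (3, n_wavelengths) effective sensitivities (sensor x illuminant)
            basis         : (n_wavelengths, k) low-dimensional reflectance basis
            """
            A = sensor_matrix @ basis                       # forward model in basis space
            w, *_ = np.linalg.lstsq(A, camera_rgb, rcond=None)
            return basis @ w                                # reconstructed spectral reflectance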

  10. The new camera calibration system at the US Geological Survey

    USGS Publications Warehouse

    Light, D.L.

    1992-01-01

    Modern computerized photogrammetric instruments are capable of utilizing both radial and decentering camera calibration parameters, which can increase plotting accuracy over that of older analog instrumentation technology from previous decades. Also, recent design improvements in aerial cameras have minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. The Geological Survey's calibration facility is explained, and the additional calibration parameters now provided in the USGS calibration certificate are reviewed. -Author

  11. Interior detail of structural elements section; camera facing east. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior detail of structural elements section; camera facing east. - Mare Island Naval Shipyard, Supply Building, Walnut Avenue, southeast corner of Walnut Avenue & Fifth Street, Vallejo, Solano County, CA

  12. Sensing driver awareness by combining fisheye camera and Kinect

    NASA Astrophysics Data System (ADS)

    Wuhe, Z.; Lei, Z.; Ning, D.

    2014-11-01

    In this paper, we propose a Driver Awareness Catching System to sense the driver's awareness. The system consists of a fisheye camera and a Kinect. The Kinect, mounted inside the vehicle, is used to recognize and locate the driver's face in 3D. The fisheye camera, mounted outside the vehicle, is used to monitor the road. The relative pose between the two cameras is calibrated with a state-of-the-art method for calibrating cameras with non-overlapping fields of view. The camera system works as follows: first, the Microsoft Kinect SDK is used to track the driver's face and capture the eye location together with the sight direction; second, the eye location and sight direction are transformed into the coordinate system of the fisheye camera; third, the corresponding field of view is extracted from the fisheye image. Because there is a small displacement between the driver's eyes and the optical center of the fisheye camera, a view-angle deviation arises. Finally, we analyzed the resulting error distribution by numerical simulation to establish the feasibility of the camera system, and we implemented the system and achieved the desired effect in real-world experiments.
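
    The second step above, moving the eye location and sight direction into the fisheye camera's coordinate system, is a rigid-body transform with the calibrated relative pose. A minimal Python sketch follows, assuming the convention x_fisheye = R @ x_kinect + t; the paper's calibration defines its own convention, so this is illustrative only.

        import numpy as np

        def to_fisheye_frame(eye_xyz, gaze_dir, R, t):
            """Transform an eye position and gaze direction from the Kinect frame
            into the fisheye-camera frame, given rotation R and translation t."""
            eye_f = R @ np.asarray(eye_xyz, dtype=float) + np.asarray(t, dtype=float)
            gaze_f = R @ np.asarray(gaze_dir, dtype=float)   # directions: rotation only
            gaze_f /= np.linalg.norm(gaze_f)
            return eye_f, gaze_f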

  13. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon point-and-shoot autofocusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  14. Camera traps can be heard and seen by animals.

    PubMed

    Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  15. Contextual view of building 733; camera facing southeast. Mare ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Contextual view of building 733; camera facing southeast. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA

  16. Interior view of second floor sleeping area; camera facing south. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior view of second floor sleeping area; camera facing south. - Mare Island Naval Shipyard, Marine Barracks, Cedar Avenue, west side between Twelfth & Fourteenth Streets, Vallejo, Solano County, CA

  17. Improved Tracking of Targets by Cameras on a Mars Rover

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert

    2007-01-01

    A paper describes a method devised to increase the robustness and accuracy of tracking of targets by means of three stereoscopic pairs of video cameras on a Mars-rover-type exploratory robotic vehicle. Two of the camera pairs are mounted on a mast that can be adjusted in pan and tilt; the third camera pair is mounted on the main vehicle body. Elements of the method include a mast calibration, a camera-pointing algorithm, and a purely geometric technique for handing off tracking between different camera pairs at critical distances as the rover approaches a target of interest. The mast calibration is an extension of camera calibration in which the camera images of calibration targets at known positions are collected at various pan and tilt angles. In the camera-pointing algorithm, pan and tilt angles are computed by a closed-form, non-iterative solution of the inverse kinematics of the mast combined with mathematical models of the cameras. The purely geometric camera-handoff technique involves the use of stereoscopic views of a target of interest in conjunction with the mast calibration.
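
    The closed-form pointing step amounts to converting a target position expressed in the mast frame into pan and tilt angles. The Python sketch below shows that geometry under an assumed axis convention (x forward, y left, z up, pan about z, tilt about the panned y axis); the rover's actual kinematic model includes calibrated offsets that are omitted here.

        import numpy as np

        def pan_tilt_to_target(target_xyz):
            """Pan and tilt angles (radians) that aim the boresight at a 3D point."""
            x, y, z = target_xyz
            pan = np.arctan2(y, x)                 # rotate about the vertical axis
            tilt = np.arctan2(z, np.hypot(x, y))   # elevate toward the target
            return pan, tilt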

  18. Metrovisionlab: A Matlab Tool for Learning Vision Camera Calibration

    NASA Astrophysics Data System (ADS)

    Pastor, J. J.; Santolaria, J.; Samper, D.; Aguilar, J. J.

    2009-11-01

    This paper describes the Metrovisionlab computer application implemented as a toolbox for the Matlab program. The application: 1) simulates a virtual camera, providing a simple and visual understanding of how the various characteristics of a camera influence the image that it captures; 2) generates the coordinates of synthetic calibration points, both in the world reference system and the image reference system, commonly used in camera calibration; and 3) can calibrate with the most important and widely-used methods in the area of vision cameras, using coplanar (2D) or non-coplanar (3D) calibration points.

  19. Interior detail of tower space; camera facing southwest. Mare ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior detail of tower space; camera facing southwest. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA

  20. Oblique view of southeast corner; camera facing northwest. Mare ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Oblique view of southeast corner; camera facing northwest. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA

  1. VIEW OF EAST ELEVATION; CAMERA FACING WEST Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF EAST ELEVATION; CAMERA FACING WEST - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA

  2. VIEW OF SOUTH ELEVATION; CAMERA FACING NORTH Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF SOUTH ELEVATION; CAMERA FACING NORTH - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA

  3. VIEW OF WEST ELEVATION: CAMERA FACING NORTHEAST Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF WEST ELEVATION: CAMERA FACING NORTHEAST - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA

  4. VIEW OF NORTH ELEVATION; CAMERA FACING SOUTH Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF NORTH ELEVATION; CAMERA FACING SOUTH - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA

  5. 19. Lower Mezzanine betting area. Camera pointed WNW from stairsteps ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    19. Lower Mezzanine betting area. Camera pointed WNW from stairsteps leading up to original Clubhouse. (July 1993) - Longacres, Clubhouse & Additions, 1621 Southwest Sixteenth Street, Renton, King County, WA

  6. Design and Field Test of a Galvanometer Deflected Streak Camera

    SciTech Connect

    Lai, C C; Goosman, D R; Wade, J T; Avara, R

    2002-11-08

    We have developed a compact, fieldable, optically deflected streak camera, first reported at the 20th HSPP Congress. Using a triggerable galvanometer that scans the optical signal, the imaging and streaking function is an all-optical process that incurs no photon-electron-photon conversion or photoelectronic deflection. As such, the achievable imaging quality is limited mainly by the optical design, rather than by multiple conversions of the signal carrier and high-voltage electron-optics effects. All core elements of the camera are packaged into a 12 inch x 24 inch footprint box, a size similar to that of a conventional electronic streak camera. At LLNL's Site-300 Test Site, we have conducted a Fabry-Perot interferometer measurement of a fast object velocity using this all-optical camera side-by-side with an intensified electronic streak camera. The two cameras were configured as independent instruments, each recording one branch of a 50/50 split of the same incoming signal. Given the same signal characteristics, the test results clearly demonstrated superior imaging performance for the all-optical streak camera: higher signal sensitivity, wider linear dynamic range, better spatial contrast, finer temporal resolution, and larger data capacity than its electronic counterpart. The camera also demonstrated the structural robustness and functional consistency needed for field environments. This paper presents the camera design and the test results in both pictorial records and post-processed graphic summaries.

  7. Camera Traps Can Be Heard and Seen by Animals

    PubMed Central

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356

  8. 360 deg Camera Head for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.

    2012-01-01

    The 360 camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360 view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.

  9. View of main facade (southwest side), camera facing northeast ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of main facade (southwest side), camera facing northeast - Golden Gate International Exposition, Hall of Transportation, 440 California Avenue, Treasure Island, San Francisco, San Francisco County, CA

  10. Solid-state framing camera with multiple time frames

    SciTech Connect

    Baker, K. L.; Stewart, R. E.; Steele, P. T.; Vernon, S. P.; Hsing, W. W.; Remington, B. A.

    2013-10-07

    A high-speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation of 5 ps, but this separation can be varied from hundreds of femtoseconds up to nanoseconds, and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.
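
    As a rough illustration of how the inter-frame separation scales, the sketch below converts a desired delay into the optical path difference between probe arms, assuming free-space propagation; it is not the authors' specific layout.

      # Light travels roughly 0.3 mm per picosecond in vacuum, so the 5 ps
      # frame separation corresponds to about 1.5 mm of extra probe path.
      C_MM_PER_PS = 0.2998

      def path_difference_mm(delay_ps):
          return delay_ps * C_MM_PER_PS

      print(path_difference_mm(5.0))  # ~1.5 mm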

  11. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    PubMed

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.
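
    The refractive effect such calibrations must account for can be sketched with Snell's law at an idealized flat port: a ray travelling at some angle in water emerges at a larger angle in air, so the in-water field of view is compressed. The indices below are nominal assumptions and the port glass is ignored.

      # Minimal Snell's-law sketch of flat-port refraction (port glass ignored).
      import math

      N_WATER, N_AIR = 1.33, 1.00

      def in_air_angle_deg(in_water_angle_deg):
          """Angle (from the port normal) of a ray after it leaves the water."""
          sin_air = N_WATER / N_AIR * math.sin(math.radians(in_water_angle_deg))
          return math.degrees(math.asin(sin_air))

      print(f"{in_air_angle_deg(30.0):.1f} deg")  # ~41.7 deg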

  12. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    PubMed Central

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  13. Detail of towers at southwest corner; camera facing northeast. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail of towers at southwest corner; camera facing northeast. - Mare Island Naval Shipyard, Hospital Wards, Cedar Avenue, east side between Fourteenth Avenue & Cossey Street, Vallejo, Solano County, CA

  14. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. In the first, the camera image is focused onto the lenslet array, which is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. In the second plenoptic form, the lenslet array relays the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to produce a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
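
    A highly simplified sketch of the backward-raytracing idea described above: several rays per sensor pixel are traced out through the entrance pupil, the scene is approximated by a few depth planes, and the hits are averaged into one pixel value. The geometry and scene below are toy placeholders, not the authors' simulation package.

      # Toy backward raytracer: average the color seen along a bundle of rays
      # leaving one sensor point through random samples on a 10 mm pupil.
      import random

      # Each plane: (depth_mm, half_extent_mm, color) -- a square card on the axis.
      planes = [
          (500.0, 10.0, (255, 0, 0)),    # small red card, near
          (1000.0, 1e9, (0, 0, 255)),    # blue background, far
      ]

      def render_pixel(px_mm, py_mm, rays_per_pixel=16):
          acc = [0.0, 0.0, 0.0]
          for _ in range(rays_per_pixel):
              ax, ay = random.uniform(-5, 5), random.uniform(-5, 5)  # pupil sample
              for depth, half_extent, color in sorted(planes):       # near to far
                  # Toy similar-triangle projection of the ray onto this plane.
                  hx = px_mm + (ax - px_mm) * depth / 100.0
                  hy = py_mm + (ay - py_mm) * depth / 100.0
                  if abs(hx) <= half_extent and abs(hy) <= half_extent:
                      acc = [a + c for a, c in zip(acc, color)]
                      break                                          # nearer plane occludes
          return tuple(a / rays_per_pixel for a in acc)

      print(render_pixel(0.0, 0.0))  # a mix of the red card and blue background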

  15. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    PubMed

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  16. High Performance Imaging Streak Camera for the National Ignition Facility

    SciTech Connect

    Opachich, Y. P.; Kalantar, D.; MacPhee, A.; Holder, J.; Kimbrough, J.; Bell, P. M.; Bradley, D.; Hatch, B.; Brown, C.; Landen, O.; Perfect, B. H.; Guidry, B.; Mead, A.; Charest, M.; Palmer, N.; Homoelle, D.; Browning, D.; Silbernagel, C.; Brienza-Larsen, G.; Griffin, M.; Lee, J. J.; Haugh, M. J.

    2012-01-01

    An x-ray streak camera platform has been characterized and implemented for use at the National Ignition Facility. The camera has been modified to meet the experiment requirements of the National Ignition Campaign and to perform reliably in conditions that produce high EMI. A train of temporal UV timing markers has been added to the diagnostic in order to calibrate the temporal axis of the instrument, and the detector efficiency of the streak camera was improved by using a CsI photocathode. The performance of the streak camera has been characterized and is summarized in this paper. The detector efficiency and cathode measurements are also presented.
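
    A minimal sketch of how such a marker train can calibrate the record's time axis: fit the measured marker pixel positions against their known temporal spacing, then use the fit to map any pixel column to time. The pixel positions and comb period below are made-up illustrative values.

      # Fit a pixel -> time mapping from timing-comb positions (illustrative values).
      import numpy as np

      marker_pixels = np.array([102.0, 287.5, 471.8, 658.3, 843.1])  # measured comb positions
      marker_period_ns = 1.0                                          # assumed comb spacing
      marker_times_ns = marker_period_ns * np.arange(len(marker_pixels))

      # A low-order polynomial absorbs modest sweep nonlinearity.
      pixel_to_time = np.polynomial.Polynomial.fit(marker_pixels, marker_times_ns, deg=2)

      print(pixel_to_time(400.0))  # time (ns after the first marker) at pixel column 400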

  17. Video indirect ophthalmoscopy using a hand-held video camera.

    PubMed

    Shanmugam, Mahesh P

    2011-01-01

    Fundus photography in adults and cooperative children is possible with a fundus camera or by using a slit lamp-mounted digital camera. A RetCam™ or a video indirect ophthalmoscope is necessary for fundus imaging in infants and young children under anesthesia. Herein, a technique for converting a digital video camera into a video indirect ophthalmoscope for fundus imaging is described. This device will allow anyone with a hand-held video camera to obtain fundus images. Limitations of this technique involve a learning curve and an inability to perform scleral depression.

  18. Neutron camera employing row and column summations

    DOEpatents

    Clonts, Lloyd G.; Diawara, Yacouba; Donahue, Jr, Cornelius; Montcalm, Christopher A.; Riedel, Richard A.; Visscher, Theodore

    2016-06-14

    For each photomultiplier tube in an Anger camera, an R×S array of preamplifiers is provided to detect electrons generated within the photomultiplier tube. The outputs of the preamplifiers are digitized to measure the magnitude of the signals from each preamplifier. For each photomultiplier tube, corresponding summation circuitry including R row summation circuits and S column summation circuits numerically adds the magnitudes of the signals from the preamplifiers for each row and for each column to generate histograms. For a P×Q array of photomultiplier tubes, P×Q summation circuitries generate P×Q row histograms including R entries and P×Q column histograms including S entries. The total set of histograms includes P×Q×(R+S) entries, which can be analyzed by a position calculation circuit to determine the locations of events (detection of a neutron).
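
    The row/column summation idea can be sketched numerically: per-preamplifier signal magnitudes are collapsed into R row sums and S column sums, and an event position is then estimated from those histograms (here with a simple centroid, which stands in for the patent's position calculation circuit). Array sizes and signals below are illustrative.

      # Illustrative row/column summation with a centroid position estimate.
      import numpy as np

      R, S = 8, 8
      signals = np.random.poisson(2.0, size=(R, S)).astype(float)  # stand-in preamp magnitudes
      signals[3, 5] += 50.0                                         # one simulated event

      row_hist = signals.sum(axis=1)  # R entries
      col_hist = signals.sum(axis=0)  # S entries

      row_centroid = (row_hist * np.arange(R)).sum() / row_hist.sum()
      col_centroid = (col_hist * np.arange(S)).sum() / col_hist.sum()
      print(f"estimated event position: row {row_centroid:.2f}, col {col_centroid:.2f}")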

  19. The Near Infrared Camera (NIRCam) optics

    NASA Astrophysics Data System (ADS)

    Horner, S.; Rieke, M.; Kelly, D.; NIRCam Team

    2005-12-01

    The Near Infrared Camera (NIRCam) for NASA's James Webb Space Telescope (JWST) is one of the four science instruments installed into the Integrated Science Instrument Module (ISIM). NIRCam has very stringent requirements on its optical quality, both to meet the observatory requirement of diffraction-limited images at 2 microns and because NIRCam is used as the wavefront sensing system in JWST to determine the overall observatory wavefront error. NIRCam operates over the waveband 0.6 to 5.0 microns, must operate with low wavefront error at 37 K, and must meet these requirements after a warm launch. This poster is an overview of the NIRCam instrument's optical hardware and performance. The NIRCam instrument is funded by NASA/GSFC under contract NAS5-02105.
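
    For context, the 2-micron diffraction limit mentioned above works out to roughly 0.08 arcsec when the 6.5 m JWST primary aperture is assumed (the aperture is not stated in this record):

      # Rayleigh criterion at 2 microns for an assumed 6.5 m aperture.
      import math

      wavelength_m = 2.0e-6
      aperture_m = 6.5
      theta_rad = 1.22 * wavelength_m / aperture_m
      print(f"{math.degrees(theta_rad) * 3600.0:.3f} arcsec")  # ~0.077 arcsec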

  20. Relevance of ellipse eccentricity for camera calibration

    NASA Astrophysics Data System (ADS)

    Mordwinzew, W.; Tietz, B.; Boochs, F.; Paulus, D.

    2015-05-01

    Plane circular targets are widely used for calibrating optical sensors in photogrammetric set-ups. Due to this popularity, their advantages and disadvantages are well studied in the scientific community. One main disadvantage occurs when the target is not parallel to the image plane. In this geometric constellation, the target projects as an ellipse with an offset between its projected geometric center and the center of the projected ellipse. This difference is referred to as ellipse eccentricity and is a systematic error which, if not treated accordingly, has a negative impact on the overall achievable accuracy. The magnitude and direction of eccentricity errors depend on various factors, the most important of which is the target size: the bigger an ellipse is in the image, the bigger the error will be. Although correction models dealing with eccentricity have been available for decades, it is mostly treated as a planning task in which the aim is to choose the target size small enough that the resulting eccentricity error remains negligible. Besides the fact that advanced mathematical models are available and that the influence of this error on camera calibration results has still not been completely investigated, there are various additional reasons why bigger targets cannot or should not be avoided. One of them is the growing image resolution that comes as a by-product of advancements in sensor development: smaller pixels have a lower S/N ratio, necessitating more pixels to ensure geometric quality. Another scenario might need bigger targets due to larger scale differences, where distant targets should still contain enough information in the image. In general, bigger ellipses contain more contour pixels and therefore more information. This helps target-detection algorithms perform better even under non-optimal conditions, such as data from sensors with a high noise level. In contrast to rather simple measuring situations in a stereo or multi-image mode, the impact